TAONAW Hugo Version

#+MACRO instagram @@html:https://www.instagram.com/p/$1

Content

Help Desk Head Desk

INTRO

The main technology section. Every project starts here, eventually branching off to its own category.

Libre and Clothes

When I write, I live in Emacs (with the awesome Solarized theme) inside Org-mode.

With time, I found that Org has already made me a more efficient writer and note taker. I write notes in every meeting now, whether it’s my “turn” or not. I write notes as I work, about every solution and every problem I’m facing. I write first thing in the morning, usually about org-related thoughts I had as I woke up, over a cup of Sumatra coffee (a little almond milk, one pack of sugar). Quite honestly, Org makes me feel good, because it’s transparent. It’s an extension of my thoughts, continuing on one long line, uninterrupted, before I stop to think a second and reflect on what I was thinking (M-q).

There’s no pretending in Org. No fancy text, fonts, or even images. Style is only applied to function. It’s a delicate balance which, with the Solarized theme, works extremely well (by the way, the story of the man who created Solarized is quite interesting and worth reading). There’s also an inviting segue here about Linux, which is the Org-writing mindset extended into an operating system. But I joyfully digress.

Alright then. But every now and then you need to present stuff, and this means you need to “dress up” so other people can talk to you and relate. The “Normals,” so to speak, do not understand my Org dedication and often give me concerned looks when I type away, a single long line into a blank screen. The purity is empty, the lack of buttons and distracting elements feels threatening without GUI guidance. Fine then, I can do fancy schmancy.

Most Org-folk I’ve read and listened to talk about LaTeX. In my case, that meant a full installation, which is huge. Over 2GB huge. Not worth it for occasional usage, especially since I work in a Microsoft environment at work, and most people I’ll share with will need .docx or .ppt format anyway. So for me, .odt seems like a better answer.

Two things are needed on my Emacs (version 25.2.1) for that:

  1. Download and install LibreOffice. It comes built in with many personal desktop-geared Ubuntu distros, but in my version (Mint) I chose to opt out at the start. OK, no biggie, the full LibreOffice suite is only about 100MB, and I can do that. I can see myself editing the occasional Word file or producing a PDF.
  2. Add the following to .emacs to turn on the .odt option in the export dispatcher:

    `(eval-after-load "org" '(require 'ox-odt nil t))`

Now I can create the .odt file, which I can open in the LibreOffice Writer I just downloaded. Ooof. Hello GUI word processing, with its weird paper-on-screen look. And the white, the white! It burnsss usss.... But anyway. Overall things look excellent, and if I want to change fonts, move images around, etc., now I can, in a more eye-candy format, without leaving Linux. Then again, if I really need to produce a document, I might as well save my .odt in Writer to a .docx and remote into my work computer, where Microsoft reigns supreme. Options. We like having them, yes?
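Since most people I share with need .docx anyway, Org can hand that conversion to LibreOffice directly. A sketch for .emacs, assuming the soffice binary is on your PATH (the ODT exporter shells out to it):

```elisp
;; Load the ODT exporter, as above.
(eval-after-load "org" '(require 'ox-odt nil t))
;; Ask Org to convert the exported .odt straight to .docx using
;; LibreOffice in headless mode (requires `soffice' on the PATH).
(setq org-odt-preferred-output-format "docx")
```

With this set, C-c C-e o o leaves a .docx next to the .org file, no trip through Writer needed.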

Yet another option I was considering is Typora, which is a pretty Markdown writer. It works with Pandoc, and can handle Word and PDF files. Typora does not exactly feel libre to me and seems heavily inspired by the various “minimalist” Mac word processing apps, if that’s your thing. It probably won’t show up in your distro’s repos and requires installation from a PPA. I used Typora for a while for Markdown, but we’ve parted ways now.

I’m curious how this will hold up when I present my notes (since I’ve become the unofficial note taker at work, for reasons mentioned above). But for now, I’m more than happy to take off the fancy clothes, slide back into my comfortable t-shirt and shorts, and write in Org.

Agenda, and the Benefit of Having Multiple Files

I used to write all my tasks, personal and work, into one tasks.org file.

On Sunday night, this was good. I had 5 tasks on my list, and I was ready to start my work week. But it didn’t take long (two days, actually) for tasks.org to become monster.org. It didn’t happen because of the number of tasks, which I kept (more or less) under control. It happened because of the size of the projects I was working on. Setting up computers, encryption, and even elementary personal stuff like paying my bills: each task naturally grew sub-tasks, and those in turn had their own notes and lists.

My initial solution was to create a “Details” heading for each project. It contained the time ranges I worked on a project (entered manually), and links to other sources I needed. This kept extra information out of the way when I didn’t need it, but it caused two issues. First, I now had “Details” showing on my agenda, since my time range was directly under that header. Second, things quickly got out of control on my Android phone with Orgzly, because Orgzly does not fold secondary headings. I had to deal with walls of text which I had to scroll through before I got to the most recent ToDo items.

As I was scratching my head at this, work and life continued. My task list grew each day - no, each couple of hours. Interruptions kept coming in, obscure urgent new projects popped up while older ones from previous weeks resurfaced. My list was quickly overtaken by work stuff, while personal projects remained in the background, often pushed down the list and out of view.

Realizing that I couldn’t handle viewing my tasks.org on my phone anymore (toward the end of the week, I could barely do it even in Emacs), I started using the tool I should have used more from the beginning: the agenda.

The Agenda

The agenda view changed everything. Opened from anywhere with a quick key combo, it enabled me to see everything I need to do. One thing that survived through the mess was my method of scheduling assignments I intended to work on that day or in the next couple of hours. I picked up this habit from one of the old Org tutorials floating around, which I cannot find right now. Scheduling means I still had a wall of ToDo items in tasks.org, but I only scheduled up to 5 things I intended to work on. I am only human, after all. Scheduling showed me what I wanted to do, and ToDo keywords showed me assignments that I hadn’t yet scheduled, but needed to at some point. I could view both comfortably from the agenda view with C-c a n.
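The Emacs setup behind this is tiny. A sketch of the relevant .emacs lines; the binding is the conventional one and the file path is illustrative:

```elisp
;; Make the agenda reachable "from anywhere" with one chord.
(global-set-key (kbd "C-c a") #'org-agenda)
;; Tell the agenda which files to scan; the path is illustrative.
(setq org-agenda-files '("~/org/tasks.org"))
;; "C-c a n" is then Org's built-in "Agenda and all TODOs" view:
;; the day's scheduled items on top, unscheduled TODOs below.
```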

This proved to be effective not only in Emacs, but also in Orgzly, on my phone. Thanks to the “Scheduled” search, I now also have a widget on my homescreen, an effective todo list. There’s even a check button to check off items on this widget, which marks them as “done.” Orgzly also does a good job at creating customized searches, so that I can specifically see what is scheduled for today vs. what is scheduled for the next 3 days, or the week. I can have another filter showing me my unscheduled ToDos as well, in case I want to start working on them. What a wonderful thing.

Working with the agenda, I realized I’m faster than before. At work, the first thing I do after I launch Emacs is get into the agenda view. From there, it is much faster to “tab in” to whatever task I need. This replaced my need for C-x C-r (recentf), since now I was not only in the file I needed, but also in the section I needed. Even better: C-x n s can then be used to “zoom in” on the task at hand, blocking out the long list of other items.

In the agenda, I was also able to quickly see tags and categories, edit properties, and most importantly, quickly schedule ToDos. As the agenda became my bread and butter, another small issue surfaced: the category property. Since I still used one file, I used the category property to differentiate between personal and work tasks. On the agenda, each scheduled task was placed in a “Tasks” category by default, after the file I was using. Even when I did assign a category from the agenda, I still had to do so for each child header (I thought these were supposed to be inherited, but this didn’t seem to work). This may sound like aesthetics, but being able to filter out all personal/work tasks can be very handy.

I was also thinking about my issue of having multiple values for one property, a problem I had been trying to solve for the last several weeks. As an example, consider a task of setting up several computers, where extra information such as serial numbers and models is needed. Up to that point, I had a header nested inside my tasks.org file, like “set up 4 computers.” I used the custom property :Serials: followed by the serial numbers for set-up tasks: :Serials: 1111 2222 3333 4444, etc. This didn’t work well, since Org considered the whole thing, including spaces, as the value of the property “Serials.” I could still search for the serial as text using Swiper (Ivy is one of the first things I install), but anything that had to do with properties as functions did not work. I asked about this on Reddit and in IRC several times, but could not find a satisfactory solution to breaking down properties that way. Someone, at one point, offered a rather complicated function – but I kept feeling this was too common an issue to be overlooked like that. Something more fundamental was off in the way I was working with Org, but I didn’t know what it was. So I decided to “go back” and reflect again on how Org was meant to work originally.
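Concretely, the heading looked something like this. Org stores everything after the colon as one opaque string, so an exact property match such as Serials="2222" finds nothing:

```org
* TODO Set up 4 computers
  :PROPERTIES:
  :Serials:  1111 2222 3333 4444
  :END:
```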

After re-reading some sections of the manual and watching Carsten Dominik’s presentation from back in 2008, I was reminded of Org-mode’s original built-in tools. In the lecture, Carsten emphasized Column View. It is something I had seen in passing previously, but now that I had a mess on my hands with properties, Column View glowed in a welcoming aura. A quick and efficient way to have a table of the computers’ serial numbers (or users, or models…) right next to the header, in a comfortable layout? I needed this. I could set columns per header if I wanted to, which beat all the fiddling around with the specific laptop-setup task in my old method. Or… I could just have a dedicated set-up org file which would already have the column view for laptop set-up built in, with the properties… Wait a minute.
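Column view is driven by a :COLUMNS: property on a parent heading (toggled with C-c C-x C-c); it renders each child’s properties as a table row. A sketch of what that dedicated setups file might look like, with made-up entries:

```org
* Laptop setups
  :PROPERTIES:
  :COLUMNS: %30ITEM %TODO %Serial %Model %User
  :END:
** TODO iMac for accounting
   :PROPERTIES:
   :Serial: 1111
   :Model:  A1418
   :User:   jdoe
   :END:
** DONE Laptop for front desk
   :PROPERTIES:
   :Serial: 2222
   :Model:  XPS-13
   :User:   asmith
   :END:
```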

Using Multiple Org Files

That was it. Everything I had learned to this point came together in a torrent of thoughts. The solution to the properties problem was to have a dedicated org file for each big project. After all, that’s exactly what the agenda was made for: to be the “glue” between them. This was why I was supposed to use the agenda in the first place! As long as I had a task scheduled, it didn’t matter what file it was in. All I need to do is tab in, just like I did with tasks.org. The idea of having a list of tasks in one file was so ingrained in my head from all the apps I used in the past, I was blind to what was under my nose.

With the realization that I needed to break tasks back out into different files came the realization that I was probably squeezing too much data into one org file. After all, setting up laptops as a task is a project in itself. It should include a heading for each laptop, along with a checklist of steps I do for each, and a log describing the different issues I hit in the process. From the agenda, this looks just like another project I do during the week. It fits inside the task list and on my phone just the same. I was in for another pleasant surprise: in my agenda, now that I had separated the setups into a different file, my category problem suddenly disappeared as well. All my laptop setups were shown under a “setups” category, because that is the file they are in. Wow. So this is what happens when you stop fighting something and start using it the way it was meant to be used.
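On the Emacs side, this just means letting the agenda scan all the project files. A sketch with illustrative paths:

```elisp
;; The agenda is the glue: every file listed here contributes its
;; scheduled items and TODOs to the same view.
(setq org-agenda-files
      '("~/org/tasks.org"      ; the weekly mind-dump
        "~/org/setups.org"     ; laptop setups project
        "~/org/personal.org"))
;; Or scan a whole directory instead of naming files:
;; (setq org-agenda-files '("~/org/"))
```

Items from setups.org then show up under a “setups” category automatically, since the category defaults to the file name.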

Next Steps

I’ve been using the system above for almost a week now (this post has been a week in the making). Here are some ideas about what’s next.

  1. Adaptive capture template for tickets: this was done this morning! I now have a capture template that automatically prompts me for the properties needed for each ticket I need to work on. It is then filed as a ToDo task in my weekly “tasks.org”.
  2. Figure out what to use tags for: work in progress. While categories have their place, tags are more fluid. For now, it seems like I’m gravitating toward creating “mind keywords” for certain topics or terms I’m familiar with. These, in turn, should be good for searches, since I think in these terms when I’m looking for something. For example, a task tagged with my boss’s name tells me this is a task he’s actively watching, and a “wiki” tag tells me there’s some good info stored in the notes of the task that I should probably keep for later.
  3. Learn to trust the system. It’s hard to let myself create tasks in different files. I still need my weekly “mind-dump,” a place where I throw in quick captures and tasks that are not big enough for their own file, but I should stop thinking of it as my weekly list of things I’m doing, since that’s misleading. The agenda is what reflects that now.
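That ticket template looks roughly like this in .emacs. A sketch; the file path and property names are made up for illustration:

```elisp
;; A capture template that prompts for each ticket property.
;; %^{...} asks in the minibuffer at capture time.
(setq org-capture-templates
      '(("t" "Ticket" entry
         (file+headline "~/org/tasks.org" "Tickets")
         "* TODO %^{Ticket title}
:PROPERTIES:
:Ticket: %^{Ticket number}
:User:   %^{User name}
:END:
%U
%?")))
;; "C-c c t" then files the ticket with its properties filled in.
(global-set-key (kbd "C-c c") #'org-capture)
```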

Some Good Things

Last Saturday I had an encounter with dear ol’ Aunt Dee. She’s like a lifelong buddy of mine, never too far away. She’s a shy, quiet wallflower; she sneaks up on you, undetected. You only realize she’s around after her long scrawny hands are wrapped around you, hugging, her perfume bringing back a daze of nostalgia…

I was playing another round of Cities Skylines, and realized I’m not much into the game anymore. As a matter of fact, I had been forcing it for the last hour. I had promised myself I’d go to the gym, and a part of me was trying to use any possible excuse to keep me glued to that chair. It was gray and cold, it was a weekend after a hard week, I wasn’t in the mood, I didn’t eat enough, bla bla bla. I recognized this train of thoughts: “ah! I know you… you depression you, how have you been?” and checked my list. I think anyone who’s been with dear Aunt Dee for long enough has a similar list.

Sunday was different because I started it differently. The first thing I did was go back to my elements. This means, without getting all fancy about my life philosophy, that I did my morning exercise (a few stretches, pull-ups, little weights) and meditation. I took my time before I sat at the computer: first there was one round of exercise, then I made coffee, then a second round of exercise. Then there was meditation, which was a powerful mind-dump. I talked to myself about things that are important to me, things that I’ve neglected. Then the meditation itself, a series of deep breaths while counting. This is one of my life facts: when meditation is really needed, there’s always strong resistance to it at the start. By the end I don’t want to finish it. Putting my mind in a state of neutral after emphasizing to myself the things I care about is powerful. I’m far from being a meditation master and I don’t read much on the matter, but this form of meditation always came naturally to me.

After that, Sunday started taking shape. I know so because my room started looking like my room again. First (it’s always first) the bed. Then laundry. Then the closet, which was something I hadn’t dealt with in a while. Cleaning my room is similar to cleaning my head. It’s odd, a made bed makes me feel so much better. I don’t know why, but I don’t argue, it works. Later on I went to the gym again, this time without a long struggle. Later still, I enjoyed sharing a different video game, Return of the Obra Dinn, with my partner.

This time I didn’t game as an exercise in solitude and forgetfulness. It was a mutual experience of puzzle solving. In retrospect, it reminds me of my grandparents playing bridge together, only with more high fives and laughs. There’s something special about sharing something as intimate as a video game with someone who’s been around long enough to share such an intimate thing with. Those of you who game know what I’m talking about: getting someone into your bed is not nearly as intimate as having someone by your side while you play a video game, let alone taking an active part in it. My partners are the only people alive who can grab the mouse or the phone out of my hand mid-action and not encounter the scornful disdain on my face.

Somewhere between the meditation, the game, and the avocado salad/guacamole I made, I found peace. Aunt Dee was gone, out of town, see-you-next-time. I had a good day at work yesterday (not excellent, not awesome, just good). It’s important that I mention it, for myself, for next time, and for you, the other nephews of Aunt Dee out there (or her relatives) who might find comfort in this.

Have a good rest of the week!

Desktop Wisdom

I asked my coworkers the following question: “What is the one truth, in a sentence, you’ve learned from being in a help desk position?” Here are the answers.

Chris (age: 25, years in the job: 3): “There are a lot, and I mean a lot, of stupid people in the world.”

Nick (age:28, years in the job:10): “You got to have a lot of patience”

Dave (age:63, years in the job:30): “You’re only as good as the last person you’ve helped”

Myself (years in the job: 3): “If it all really did ‘just work,’ we’d be out of a job.”

When looking at the nature of the job and the experience each one of us had, these facts make a lot of sense.

Chris is the youngest, very bright, and after money. He’s tired of users forgetting to check the power cable or not knowing how to restart their computers before calling. Nick is a chill guy who never seems to get upset. He is used to the boss demanding things, and he exercises every day, which definitely helps take out the aggravations. Dave has seen it all in this job and has been through a couple of management changes. His other favorite sentence is “tomorrow is another day.”

The Hub

Having a consistent, stable server for my org files has been on my mind for a while. I bought a Raspberry Pi (RP) to serve as a file server, a “hub” that will always be on and host these files. This is a description of the setup I’ve done to make this work. I hope you’ll find it useful! Feel free to contact me with any questions.

Buying the Raspberry Pi

For the setup I describe here, you need the RP itself, a power source, an SD card, a USB case for a 2.5 inch HDD, and the HDD itself. You’ll also need an Ethernet cord for this setup. If you don’t have one, go grab yourself an Amazon Basics CAT6 cable; it’s always good to have one around.

For the RP, you’re better off getting a package similar to this one. You could probably get it slightly cheaper by buying all the parts yourself, but not by much. Better to have everything you need at once without worrying about forgetting something. I didn’t get the above package with the SD card, so I had to wait a couple of days for another order to come in.

The HDD I had lying around already, so I got this case for it. 8 bucks.

Setting up the RP

First, we’re going to set up the SD card to run our RP.

Before we jump in, a note about my setup: I have Linux in a VM inside a Windows host. As it turns out, having Windows an alt-tab away (or in my case, ctrl+alt+left arrow away) can be very useful.

If I switch between Windows programs and Linux commands without mentioning it, that’s because this setup comes naturally to me at this point. I’ve been working this way (office and home) for almost four years, minus a half year of dual-booting with Linux Mint as my daily OS. There’s a whole story here about the switch to Linux and back, but that’s for another time.

OK! Enough chatter. Let’s get on with it.

  1. Download Raspbian (zip file).
  2. Format the SD to FAT32
  3. Unzip, and write the image to the SD using Win32 Disk Imager or UNetbootin. For me, the first option works better when I deal with SD cards and my USB-SD adapter; YMMV.
  4. Create an empty file (no extension) on the SD root called SSH.

That last part is important. Creating the file (in Windows, just create a new text file, rename it to SSH, and delete the extension) is what tells our RP to open port 22 for SSH.
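Since I have Linux a keystroke away anyway, the same steps can be done from a Linux shell. A sketch, with illustrative file names and assuming the card shows up as /dev/sdX; double-check with lsblk first, because dd to the wrong device is unrecoverable:

```shell
# Write the Raspbian image to the SD card (replace sdX with YOUR card!)
unzip raspbian_latest.zip
sudo dd if=raspbian.img of=/dev/sdX bs=4M status=progress conv=fsync

# Enable SSH: drop an empty file named "ssh" on the boot partition
# (FAT is case-insensitive, so "SSH" works too)
sudo mount /dev/sdX1 /mnt
sudo touch /mnt/ssh
sudo umount /mnt
```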

Then insert the SD card and power the RP up. Log into your router to find the RP’s IP address. This is what you need the Ethernet connection for; you won’t be able to access the RP over WiFi for this.

Go ahead and SSH into the RP as pi: ssh pi@[RP IP address here]. Your default password is raspberry.

Once in, you want to sudo into the RP config tool: sudo raspi-config. The setup tool is straightforward enough, but I found that remoting into the Pi itself with VNC (coming up) is more convenient, so the only option we really care about now is turning VNC on: go to interface options (option 5 in my case, in the middle), and do that.

Save, get out, and then download RealVNC. I downloaded it for Linux because that’s what I use to SSH into and set up my Pi. It’s also where I will later set up Syncthing (coming up), so it made sense. RealVNC is the recommended option for RPs. It’s easy to install and run, and supports VNCing with a password, which is what you need for the RP. As a bonus, it automatically creates nice thumbnails of the last logins (IP addresses), which can be useful.

With RealVNC, go ahead and connect to the RP using the same IP address and default password. Raspbian will jump on you right away, telling you that you need to change your default password and a bunch of other things, and that’s all good. Follow these screens, and after updates are downloaded, don’t restart just yet.

You might have noticed you’re working in a tiny little screen. Let’s change that quickly by going to the preferences (click on the RP icon at the top left), then Raspberry Pi Configuration. Under resolution, choose something that works well for you. For my ultrawide screen, 1920x1080 was good enough to work in. Now you can restart. If you changed your password (you should have!), leave RealVNC on and it will inform you the password is wrong (because you changed it). All good, log in.

A note about passwords and password generators

If you use a password manager (my favorite on Linux now is xpass; LastPass is another), I strongly recommend copy-pasting your password in plaintext somewhere first, then copying it to the RP password-change field and clicking the eye icon to reveal it. Make sure what you pasted is the same thing you think you pasted. In my case, I messed this up twice and had to image my SD card all over again. Yes, there’s a technique to reset the password by booting into the root shell, but this didn’t work for me. And besides, it’s annoying and requires physically connecting a screen and keyboard to the RP.

Because my RP is set up for SSH with a password for now, I made a random 32 character password for it. Copy-paste is not a choice; it’s the only way.

Encrypting the HDD

I chose to use a separate HDD for a couple of reasons:

  1. I can take it with me if I need to and mount it on another machine
  2. I have unused HDDs laying around from computer repairs. In my case, the HDD is 500GB, more than enough for text files and occasional pictures I use in Org-mode.
  3. If the HDD dies, I still have a functional RP with the OS on the SD card.

The encryption tool of choice for me ended up being LUKS, the same thing that encrypts Linux machines at startup. Actually, my first choice was VeraCrypt. It comes with a convenient GUI and better options for handling containers. As it turns out, there’s no updated version for ARM (which is what the RP is), and I’d have to compile from source. Possible, but not worth it for now.

VNC into your RP.

  1. First, install cryptsetup on the RP (apt-get). This is the LUKS suite.
  2. Figure out the USB-connected drive: sudo fdisk -l. This will show you where your HDD is in /dev. In my case, it was /dev/sda (not sda1, etc.)
  3. Format with LUKS: cryptsetup luksFormat /dev/[drive]. The default options were fine for me, but be aware that LUKS comes with many options if you so desire. When formatting, it should ask for a passphrase, so give it one. Then it should start doing its thing.
  4. When done, verify your drive is encrypted with cryptsetup luksDump /dev/[drive]. You should see information about the encryption, along with the first key (LUKS can hold up to eight keys) and its hash. This means we’re all good so far.
  5. Now, create a mount point for your drive. For me, out of habit from using Mint, that was /media/pimount. So: sudo mkdir /media/pimount.
  6. This is where things get a bit hazy since I’m writing this from memory. We need to open the encrypted drive with cryptsetup luksOpen /dev/[drive] [name]. Notice that the unlocked drive now appears under /dev/mapper/[name]; that’s how LUKS works. You never touch /dev/[drive] directly once it’s encrypted. Instead, the device mapper presents a decrypted view at /dev/mapper/[name] and encrypts everything on its way to the physical drive.
  7. Now, we need to format the drive with the file system of our choice. In my case, ext4. I believe this can also be done with FAT32 or even NTFS if you want to access your drive from Windows, but I haven’t tested that. To format, you can install GParted, which is good to have in general. However, this is easy to do with mkfs.ext4 /dev/mapper/[name]. In fact, that’s exactly what GParted would do for you anyway.
  8. Mount the freshly formatted drive: mount /dev/mapper/[name] /media/[name]. If we have the “show mounted drives” option on the desktop in RP, it should show now. As a matter of fact, it’s a good idea to enable this (right click on the RP desktop).

Before you start putting stuff on your HDD, it’s a good idea to restart and practice mounting the drive again (steps 6 and 8; you don’t need to format again!). Then do it again. The unmount command is umount (no n), or you can simply restart the Pi. There’s a way to automate this using fstab, but messing up your fstab could screw up your RP, as happened to me. I figured I never plan on really turning off the thing, so I will just keep doing it manually. Now, if someone happens to just grab your HDD and walk away, they’ll still need to mount and decrypt it.
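Put together, the whole dance looks like this. A sketch from memory, using /dev/sda and the names from my setup; yours will differ:

```shell
# One-time setup (this erases the drive!)
sudo apt-get install cryptsetup
sudo fdisk -l                              # find the drive, e.g. /dev/sda
sudo cryptsetup luksFormat /dev/sda        # sets the passphrase
sudo cryptsetup luksDump /dev/sda          # verify: key slot 0 in use
sudo mkdir /media/pimount
sudo cryptsetup luksOpen /dev/sda pimount  # decrypted view at /dev/mapper/pimount
sudo mkfs.ext4 /dev/mapper/pimount         # format once, never again
sudo mount /dev/mapper/pimount /media/pimount

# After every reboot: unlock and mount (no formatting!)
sudo cryptsetup luksOpen /dev/sda pimount
sudo mount /dev/mapper/pimount /media/pimount

# To detach cleanly:
sudo umount /media/pimount
sudo cryptsetup luksClose pimount
```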

Syncthing setup

To me, Syncthing is the best solution for all my file-syncing needs. It works without a hitch until you need to change something tiny and meaningless. Then all hell breaks loose. I don’t understand why that is, but I learned to accept it as a fact of life at this point.

During this RP weekend setup, I had to change my Syncthing setup so that the RP would become the central hub. What followed was a fruitless night of trying to figure out why Syncthing stopped syncing, and a messy Monday at work. It was such a bad setback, in fact, that I decided to create a process dubbed “downfall” for future cases just like this one, when I can’t use Org-mode at work. Yes, not having Syncthing basically means I can’t use Org. But I digress… The important lesson here is this:

Don’t mess with Syncthing. Set it up, make sure it works, and don’t touch it.

As intimidating as this sounds, Syncthing actually has a pretty simple setup process. Here’s the plan:

static/hub.png

The different colors represent the different folders in Syncthing. For me, these are personal, work, utilities (the only folder that does not have org files), and archive.

I use four devices: three daily, one (the laptop) once in a while. The home VM, which is my main productivity box, gets everything. So does the laptop; it’s meant to be my home away from home. My work VM (openSUSE Linux) only contains the work org files and the archive. On the phone, I skip media-heavy folders like DnD and some classical music FLAC files to save space.

The setup process:

  1. Install Syncthing on RP (apt-get).
  2. Choose the first device to sync. Add the RP’s ID to its Syncthing.
  3. Wait for RP to acknowledge the device. Approve.
  4. From the connecting device, share the folders you want to sync with the RP.
  5. Wait for RP to acknowledge folders. Approve each, choose path.
  6. Verify the sync works by altering the files a bit.
  7. Choose the second device to sync and repeat steps 2-6.
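For step 1, the Raspbian repos carry Syncthing (often an older build; the Syncthing site documents its own apt repo if you want something current). A sketch for keeping it running across reboots, assuming the package ships the usual systemd per-user service template:

```shell
sudo apt-get install syncthing
# Enable the per-user service instance for the "pi" user,
# so Syncthing starts on boot and survives restarts:
sudo systemctl enable syncthing@pi.service
sudo systemctl start syncthing@pi.service
# Syncthing's web GUI then listens on http://127.0.0.1:8384 on the Pi.
```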

Now, folders that are common to a couple of devices should first be added from one device (as stated above) to the RP, and then that folder should be added to the other device: Device_A –> RP –> Device_B. This ensures you’re sharing the same folder (same folder ID) and not creating duplicates. The reason you’re doing it this way, and not just sharing from one device to another, is that you don’t want the other devices to see each other; they should only “see” the RP. The RP is the only device that needs to see all of them. Fewer complications mean Syncthing works better and faster. Remember what I said earlier: don’t play with it, keep it simple and it will work. Complicate it, and you’ll spend hours trying to figure out what’s wrong.

So for example, if I first sync my home VM and share the “work” folder with the RP: after I’m done and have verified the sync works, I connect the work VM to the RP, wait until it’s recognized, and then share the work folder (which is now synced between the home VM and the RP) to the work VM.

APFS Secure Token Active Directory Problems

Chances are, you won’t even hear about Secure Token unless you happen to manage Macs for your company. Secure Token seems to be an issue only with Active Directory.

Because Apple doesn’t talk much about Secure Token, I suggest you read about it on Der Flounder. There’s a pretty good explanation of what it does and why.

My story begins with the delivery of an iMac. It usually takes me 1.5 hours to prep a Mac from start to finish, since we don’t have a working image yet; that’s a whole issue in itself. The prepped iMac gets to the user bound to the domain, so all they have to do is log in. I then connect them to their user’s share drive (as a startup item so they always have it) and see if they need access to any printers that we don’t have on the server. There is usually at least one.

I have to log in first (as the iMac’s admin) to unlock FileVault. So far, so good. Then, using the fast switching menu, I get back to the login screen and ask the user to log in using their domain credentials. On APFS, like the one here, I get the Secure Token window. No problem. I put in the admin credentials, the user is approved, and after adding said printers and share drive, we restart to make sure everything is in working order.

Last week though, it didn’t.

The user I had just authenticated was not on the login screen. My admin account was there, my test account was there, his account… nope. This was odd, since I knew we had just authenticated with the secure token. I logged into my admin account and checked the security settings, since I knew it would tell me if anyone was not allowed to log in because of FileVault… but nothing. I ran fdesetup list, which shows the list of approved FileVault users, and sure enough, he was there.

Over the next couple of days, I had a couple of users log into this iMac to test. I also had the user log into a different iMac, one I had set up a week earlier using the exact same procedure. Everyone could sign in and stay approved (they’d show up after being approved with the secure token and a restart), besides this one person. On the other iMac, we ran into the same issue with this user. The problem, then, was with the account.

Because I saw the user log in and out so many times and checked the users list, I noticed something odd: the user’s name was missing. There was nothing at the top right corner (fast switching), and in the list of users, the icon for the user showed up without a name. I’ve never seen anything like that before on a Mac.

Back at my desk, I looked into the account and couldn’t find anything odd at first. I even copied the account to create another test user… and then I saw it: the Display Name field for that user was blank for some reason. The Display Name field in AD is populated automatically when a new user is created, but not here.

Today I managed to do final testing. I repeated the same test: signed in with admin, gave my test user with the blank Display Name field access with a secure token, and restarted. Sure enough, the test user could not be seen on the login screen. Only after my admin account was signed in was the test user able to sign in as well.

As soon as I populated the Display Name field and logged in again, everything worked. The name showed up, and the secure token remained.

The conclusion: macOS pulls the user’s name from the “Display Name” field in AD, and it needs this name for the secure token. Without it, it doesn’t work. The secure token is tied to the display name of the user, not the user account itself… or at least, not completely.

About Removing Company Software

When someone asks what we install on their personal devices (BYOD laptops), I usually give them the abridged version. People rarely care about details. Every now and then, though, someone is a bit concerned and asks for more information: why we need software that tells us details about their hardware, how we use it, and of course, whether we can view their personal data. My answer to that last question is “we can if we want to.” It bothers me that this question comes from so few individuals.

A few days ago, a client gave me her under-performing BYOD laptop for a general cleanup before switching it back to strictly personal use. She knew about the different bits of software we install because she had asked me when she first brought it in. She was curious and worried about her privacy, and she got the full story from me. I have no doubt that the conversation was motivation enough for her to come back for a cleanup – and for the right reasons. I dare hope that she sought me out specifically to clean her laptop because she trusted me a bit more after that conversation.

How many people leave us with their personal laptops without ever bothering to check in with us to make sure our software is removed? A few of them will take the time to wipe their laptops, but so many more, the majority for sure, never will.

Among our most intrusive software is an antivirus client that dials home to our management environment and inventory management software that gives us full details about the hardware and software installed on the device. We also effectively have root privileges on BYOD devices – basically a backdoor into their systems.

There’s nothing unique about what we do. The same practices are followed by virtually every respectable IT organization that deals with BYODs. Nothing we do has sinister intentions, and I trust everyone on my team not to abuse root privileges on BYOD devices, which are often among the most personal things a person owns. But that doesn’t mean our clients should just trust us blindly.

From LastPass to KeePass

I’ve been using LastPass for the last 5 years and have been happy with it. I recommended it to friends, family, and co-workers. I tried to sell it on its convenience: once set up, LastPass auto-fills user and password fields, and can even log you into a website directly. LastPass also creates complicated passwords automatically and is available on every major browser, as well as on iPhone and Android.

But it seems like even LastPass’s time has come.

As we know, convenience usually comes at the price of security. LastPass auto-fill is quick and effective, but it also makes it very easy for someone else to grab your laptop, find your bank website in your history, and log in with your saved credentials. To resolve this issue, LastPass has a couple of built-in options, such as logging you out after a certain amount of time or logging off when the browser is closed. These features need to be activated on each new installation of LastPass.

So LastPass was an obvious choice for my mom’s new Chromebook. I thought I’d set her up with a new account and share passwords with her directly. I wanted her to learn to trust the app and start creating new secure passwords instead of using the same two or three she’s been using for years. But instead, I discovered problems.

The option to log off automatically when Chrome is closed was ignored. I checked and asked other users on Reddit, but all I got was the generic troubleshooting advice to make sure Chrome completely exits for the auto-logoff to work. Exiting Chrome is possible on Windows, Mac, and Linux (for which this guide was written) but, as it turned out, is not possible on a Chromebook. I summoned the ChromeOS task manager with shift+esc, or more recently search+esc (this is different from the Chrome browser’s task manager, which is accessible from inside Chrome), and saw that Chrome was still running even after I exited the app. I couldn’t force Chrome to quit either: the button to do so was grayed out when I had Chrome highlighted on the list.

That meant that the only thing protecting someone from accessing all your passwords is your Google password with the lock screen enabled. Perhaps I’m paranoid, but for me, that’s not nearly enough. I disabled the extension and asked myself these two questions:

  1. Is it worth using LastPass over Chrome’s built-in password manager?
  2. Is LastPass really a good option to securely save passwords?

The first answer is “not really.” If you’re a LastPass power user who has the application on your phone and uses a family plan (which allows you to share passwords), then yes, LastPass gives you more features. However, Chrome’s password manager now allows you to create secure passwords and sync them with your Google profile, which means you will have access to them anywhere you log in, including your phone. Since on a Chromebook your security is already handled by Google, there’s not much sense in starting a new account with LastPass, which basically does the same thing.

The second question is harder to answer. LastPass is a company whose business is securing passwords, all day, every day, and they have a great product. They are, overall, pretty transparent about their security breaches when they happen, and they apply patches and fixes very fast.

However, LastPass’s browser extension is also its weakest point. To be fair, the same can be said for any password manager that has an extension built into the browser. Various vulnerabilities have been listed before, some by LastPass themselves. If you’re really concerned about the security of your passwords, you should not use a browser extension. However, if I am hard-pressed to choose between Chrome’s built-in password solution and a third party’s solution that is built into Chrome, I will go with Chrome’s built-in solution because it’s native to the application and hence (hopefully) more secure.

But. The real answer here is that you shouldn’t use a browser extension at all. And that’s what I do these days.

My favorite solution is to use good ol’ KeePass, which has been around for about 15 years. I like KeePass for a couple of reasons:

  1. It’s a standalone program with a simple GUI and flexibility. It works and looks better than LastPass’s more complicated controls and does not rely on cookies.
  2. The only person with my passwords is me, which makes me sleep better at night. This has been my general trend since I started using Linux. It’s less about privacy or even security and more about the proud feeling of owning my own data, something I feel we don’t do enough these days.
  3. KeePass is old, open source, free, and probably not going anywhere. I’d like to say the same thing about LastPass, but companies such as these constantly get eaten by greedy corporations that inject them with crap like social network integration and ads.
  4. With its combination of key files and different ciphers (at least via plugins), it feels solid and secure. Not that LastPass’s security isn’t good enough. It should also be mentioned that LastPass has two-factor authentication.

Because KeePass doesn’t have a browser extension (at least not out of the box), I use xdotool to auto-type passwords into websites’ text fields. The workflow: I click the user field on a website, Alt+Tab back to KeePass, hit the auto-type shortcut, and watch KeePass put in my credentials as if I’m typing them from memory. Because I can customize the auto-type macro (KeePass 2 and up), this eventually makes it even more reliable than LastPass’s auto-fill feature, which sometimes doesn’t work well with fancy animated menus.
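For the curious, the same auto-type idea can be scripted directly around xdotool. Here’s a minimal Python sketch that only assembles the command line; the password is a made-up placeholder and the delay value is an arbitrary assumption, not a recommendation:

```python
import shlex

def autotype_command(text, delay_ms=25):
    """Build an xdotool invocation that types `text` into the focused window."""
    # xdotool's `type` subcommand simulates keystrokes; --delay spaces them out
    # so slow web forms don't drop characters.
    return ["xdotool", "type", "--delay", str(delay_ms), text]

cmd = autotype_command("hunter2")  # placeholder only - never hard-code real credentials
print(shlex.join(cmd))  # xdotool type --delay 25 hunter2
```

Actually running the command (say, via subprocess.run(cmd)) would type the text into whatever window currently has focus, which is exactly why KeePass’s own auto-type expects you to switch to the target field first.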

LastPass is another tool I didn’t think about replacing when I transitioned to Linux, and for a long time, I kept using it on Linux as well. When I switched away from Chrome and stopped being logged into Google all the time, Chrome’s extensions stayed behind. As with many other products (Gmail, Google Docs, Dropbox…), I’m slowly but surely finding good open source alternatives, which are often better.

Using Regex for Old macOS Models

At work, we are a small team serving a very large group of clients. Because of that, automation is very important. It is impossible to get to each client directly, and we constantly have our plates full with other projects, walk-ins, and general maintenance.

One project that was recently brought up was detecting and replacing old company-owned Macs that are out of warranty and cannot be upgraded to Apple’s newest macOS, which at the time of this writing is macOS Mojave (10.14). The idea is to locate these machines and retire them. For projects like this, we use our systems management tool, KACE. KACE used to belong to Dell and is still mostly used for Windows machines (it is most useful on Dells, obviously), but in this line of work it’s often about using the tools you already have.

Identifying macOS 10.14 minimum requirements

Here’s Apple’s official compatible model list, which lists models by their release date in a buyer-friendly format, such as “MacBook Air introduced in mid 2012 or later.” The problem is that this ambiguous description doesn’t work with system management tools, which use the specific model identifiers (as they should). Given the popularity of Macs, however, it’s not hard to find a list of the model identifiers matched with Apple’s official list. Here’s EveryMac.com’s compatible model identifier list.

Using the above list, we can use this full list from the same site to find the first model identifier that will not work with macOS 10.14. This model would be the newest model on our “non-compatible list.” We need to do this for each of the major Mac families.

Loading it into KACE:

KACE lists model identifiers under “System Model”. In the case here, we are looking at a MacBook Pro 14,2 (which is mid 2017, according to the list above). This is the key we will use to create our filter:

/mnt/hgfs/Space/hallway/finding-non-macos14-compatible-macs2.png

One of KACE’s killer features is its smart labels, which allow us to build very detailed filters to profile hardware. Smart labels comb through the entire inventory and capture devices that fit our desired filter. There is a basic wizard built into KACE which allows us to create basic labels, but if we need to dive in deeper, we need to use regex or straight-up SQL.

To create the smart label, I used this helpful article. I suspect KACE’s regex is nothing special, but this was my first time getting my hands dirty with it. It was fun! Here’s what it looks like:

/mnt/hgfs/Space/hallway/finding-non-macos14-compatible-macs1.png

As we can see, the filter uses conditions for each of the six family models. Let’s take a look at ^(MacBookAir[1-4],) as an example, keeping in mind what the System Model field (above) looks like.

  1. Our complete regex statement needs to be enclosed in parentheses.
  2. We use ^ as a starting point. It declares that this is the start of the string, and nothing should come before it.
  3. We spell out MacBookAir because that’s just the text string of the model.
  4. Next, square brackets define a range of numbers; in this case, 1 through 4. This goes back to the minimum requirement mentioned previously. Using our lists above, we know that the lowest compatible MacBook Air model for 10.14 is MacBookAir5,1. This means MacBookAir4,2 is the newest non-compatible Mac we need to capture, which is why we capture any model identifiers from 4,2 down (4,1, 3,2, 3,1, etc.).
  5. The comma after the number range is just a text string, matching the System Model field in KACE.
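Outside KACE, the same condition is easy to sanity-check with any regex engine. Here’s a quick Python sketch; the model identifiers in the list are just sample values I made up for the test:

```python
import re

# The MacBook Air condition from the smart label: match major versions 1-4,
# i.e., anything older than the minimum compatible model, MacBookAir5,1.
pattern = re.compile(r"^(MacBookAir[1-4],)")

models = ["MacBookAir3,1", "MacBookAir4,2", "MacBookAir5,1", "MacBookPro14,2"]
non_compatible = [m for m in models if pattern.match(m)]
print(non_compatible)  # ['MacBookAir3,1', 'MacBookAir4,2']
```

Note that 5,1 and the MacBook Pro fall through, exactly as the smart label intends: only the pre-5,1 Airs are captured.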

Creating the report

KACE can create an automatic report for machines that meet specific criteria. The reports also come with a built-in filter and a wizard, though not as customizable as the one in smart labels.

It’s generally a good idea to base a report on a smart label, because the smart label can be applied to other actions in KACE – for example, if we’d later like to send an alert to the users of these old Macs, telling them to come to IT with their Macs for inspection. Smart labels also work immediately against KACE’s database, whereas reports are built only from new results, meaning we would need to wait for Macs to connect and update in KACE before we see results. If someone has a work Mac they generally keep at home and don’t connect to our network, we would wait a long time before we knew about it.

In KACE’s reporting tools, there’s an option to base the report directly on a smart label, which is what we’ll do here. I won’t cover the report in this post, but it’s important to mention that while the smart label is good for capturing the information, the report is much better at presenting it. Use the report to specify things like the user’s full name, last login to KACE, IP address, etc. The report can also be exported as a CSV or HTML file, among other formats, and can be emailed.

Org Update

It’s been a while since I blogged about my Org activities. Overall, not a lot has changed from my latest setup. Setting up a new site on GitHub with Hugo kept me fairly busy and away from hacking at Emacs.

static/OrgUpdate_1.png

Lighter Agenda: No Sub-Tasks

Looking at my agenda, sometimes less is more. I don’t rush to include every single detail in my agenda anymore, especially at work, because it’s redundant. We have a ticketing system that we have to use, and the details are constantly updated by different team members. Because of that, updating technical details for myself again under each heading is not that appealing.

But I do still include notes, just differently. I start notes with an inactive timestamp and describe where I left off – in other words, exactly what the sub-tasks used to do. So I don’t bother with many sub-tasks anymore. The benefit of one heading is better visual organization: I get a nice logbook which includes all the time I spent on the project, and my notes are more streamlined and easier to find. It also clears the agenda of the clutter of sub-tasks that sometimes don’t make sense to me anymore because I’ve already forgotten the big picture. I also grew more comfortable with changing the wording of a task’s heading to better reflect that big picture. Since I keep short notes of where I left off, with the most recent one at the top, it’s easy to see what’s going on.

static/OrgUpdate_2.png

Sometimes I would even copy-paste from the ticket straight into Org just to remind myself what was done; other times I include references to other people and cases; yet other times, if I have to vent about something, there’s a link to my journal (more on that later).

A fun side note: Orgzly, my companion Android app, now has the ability to start with Org headings collapsed. This means that when I’m on the go, I can open just the task at hand and view my notes, which I often update just before I get up to do something.

Keeping Track Of Time Without Going Nuts

The logbook contains estimated times. I almost never get to close a clock on a task once I’ve started it, because I almost never get the chance to keep working on one thing before I am called away or have to handle a prerequisite. Other times, I simply forget to start the clock. I estimate how much time I worked on something when I’m back at my desk. Now, since I don’t bother with sub-tasks and separate clocking times for them, I need to expand my agenda to view my logs (l in agenda view). This allows me to see the time I spent working on a certain task, even though it is marked as done later, sometimes days (or weeks) later. This way I know when the whole task was finished, and I can see when (and for how long) I was working on it.

All of that said, most of my tasks don’t get logged with a clock. That’s because many of them are simple tasks or just interruptions, like when I am asked a question. Overall, though, I should get in the habit of logging tasks in retrospect to see where my time went.

Another area I stopped worrying about as much is which org file in my archive my headers go to when I refile. That was a mouthful, so let me explain.

My system is based on a weekly review. Every week has its own .org file. In the past, filenames used to specify a date range, like 08102018_15102018.org, for example. This caused me complications, because I seldom had the chance to summarize and finish my week on Sunday and start with a clean slate on Monday. Many times I wanted to conclude an .org file on a Friday after work, or didn’t get the chance to do so until Monday or even Tuesday. Besides, this whole week-range thing is redundant because you can always list files by date anyway (to see when each was last modified).

On the other hand, since I started using my journal again, which is based on weeks rather than months, it made more sense to name the files after the week numbers. For instance, this week is w41.org, last week was w40.org, etc. My journal tells me what week I’m on if I ever need to reflect, the file name is clear and obvious, and there’s no room for confusion. My weekly summaries will always be based on their respective w<number>.org file, even if I summarize a month later.
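As a side note, the week number is trivial to derive if you ever want to script the file naming. A minimal sketch using Python’s ISO week numbers – assuming ISO weeks match how Org counts them, which can be off by one around New Year’s:

```python
import datetime

def weekly_filename(d: datetime.date) -> str:
    """Return the weekly org file name, e.g. 'w41.org', for a given date."""
    # isocalendar() returns (ISO year, ISO week number, ISO weekday).
    return f"w{d.isocalendar()[1]}.org"

print(weekly_filename(datetime.date(2018, 10, 9)))  # w41.org
```

A tiny script like this could, for example, create next week’s file ahead of time or open the current week’s file from a shell alias.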

In the past, I refiled events and tasks depending on what week they were to take place. If I was planning a vacation four weeks in advance, for example, I would open a new file for that week and place the event there. This led to all kinds of problems when I worked on projects that stretched over more than a weekly period. Let’s say I take a vacation from Friday to Monday of next week. Now what, which file do I choose? At the time, I opted for the completion date. But this, too, wasn’t simple, because certain projects (especially at work) could last months, or re-surface from the past. That would mean I would have to move them to the appropriate week and think about what “done” really means. One day, thinking about it out loud, I just started laughing at myself: “Dude,” I said, as I like to call anyone for some reason, “what are you doing? Org takes care of all of that for you automatically! That’s what the agenda is for!” Who cares if I placed a project in week 20 or week 30? I never search for it that way anyway; I just see it on the agenda or search for it with the agenda (C-a s). That’s what it’s for. So I stopped caring about that, and events now naturally fall into the week I created them in.

The Journal Revisited

I don’t use the journal as much as I used to earlier this year. I find that I use it to capture experiences and emotions more than technical notes. For the latter, I now have an org file called “wiki,” which I keep organized as it grows. The journal is good for capturing moods and mental “patterns” I can look into during my weekly review. For this, I also use tags. This is a good exercise that allows me to recognize the mental “traps” I fall into more often than not.

The journal works nicely with my weekly reviews, which are essentially weekly videos I make while reading back through my agenda and journal. With time, these became more of a personal “summary of summaries” where I highlight my week’s tasks and review them, usually in a 10-minute segment or so. I keep these in a well-compressed mp4 format on an SD card (FFmpeg is awesome for that), and I can easily use one card for an entire year and still have room to spare. I am now starting to name the videos using the same weekly scheme that I use for my weekly agenda files and journal.
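For reference, a compression step like that is easy to wrap in a small script. This sketch only assembles the ffmpeg command; the codec settings (CRF 28, a slow preset, low-bitrate AAC audio) are my assumptions for a “good enough for voice” small file, not necessarily the exact flags I use:

```python
def compress_command(src: str, dst: str, crf: int = 28) -> list:
    """Build an ffmpeg command that re-encodes a clip into a small H.264 mp4."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-crf", str(crf), "-preset", "slow",  # video: smaller file at similar quality
        "-c:a", "aac", "-b:a", "96k",                            # audio: talking-head clips need little bitrate
        dst,
    ]

# Hypothetical file names matching the weekly naming scheme:
print(" ".join(compress_command("w41-summary.mov", "w41-summary.mp4")))
```

Raising the CRF value shrinks the file further at the cost of quality; for a journal you only skim occasionally, erring on the small side makes sense.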

Evolution of the Video Journal & Org

Over time, my videos have definitely gotten shorter. This is because I got used to Org in the different areas where I record my thoughts. I’ve discussed these a couple of times before: tasks, task notes, and journal.

My tasks rarely contain sub-tasks anymore. This is odd, because sub-tasking was one of the things that got me into Org-mode initially. Over time I found that I’d rather leave notes to myself about where I left off and what needs to be done instead of using actual tasks.

Task notes are usually brief (one paragraph of 3 to 5 lines) and are always timestamped, with the most recent on top. This quickly lets me know where I stopped a task and why. Interestingly, I also use them for general mundane errands, like grocery shopping. In such cases, I can also include a checklist.

The journal is a different story. This is where I let myself spill the beans. I took a long break from using a journal because I felt there was already so much recording done on a keyboard, and I wanted a quicker way to record my thoughts.

But recording my thoughts on video was inconvenient. I had to take a break from my workflow, record a video, name it, compress it, and save it. It required that I find a quiet corner – nearly impossible during my day – to record for a few minutes. So out of necessity, I started to include more text in my task notes. Eventually, I felt they became too long, too personal, and just too heavy. I needed a separation, and going back to my journal felt natural.

I find that typing is just slow enough to make me process what I’m thinking. I can edit what I’m trying to say, which means I can rethink and find a better way of describing it. To prevent myself from going on and on, I journal about specific events only, never an entire day. My capture template takes the link of my event and makes it the title, and then I continue to discuss only that event.

Yesterday, I noticed something interesting: with time, my rambling on video was reduced from going on and on for an hour plus (and multiple videos) to shorter segments. Here’s a visual:

static/videojournalevolved_1.png

Above: over time, my clips became shorter

Finally, last weekend, I reached the conclusion that my written journal is just better at keeping track of my experiences. Since I re-created the way I save my archive files, it also means the links from the journal org file to the weekly org files and their events are never broken: I just have to make sure to refile an event from my “oh snap” thought-dump org file into the current week’s org file, and I’m good.

I still record a short “weekly summary” on weekends, and now I find I’m actually looking forward to it. These are now 10-20 minute videos in which I briefly go through my agenda and logged events and explain what happened while my memories of the week are still fresh. I then give the week a “theme.”

This works as a way for me to remember what happened if, far in the future, I want to reflect on my experiences but am not looking for something specific, or am not sure what it is. It’s also a good way to reflect on the major events of the week after I’ve had a weekend to slow down and process. It works.

Emacs Windows

In his 5th Emacs podcast, Rakhim discusses the difficulties of window management in Emacs. I agree with him. Emacs windows are a pain. It was probably one of my longest-standing pet peeves with the program, and it wasn’t until this podcast that I realized how much better off I am than I used to be.

I don’t use any extension that manages windows (unless you count Ivy’s switch-buffer, which he uses too. Ivy is awesome). My method is based on bits and pieces I picked up. Here are a couple of lines from my settings.org which I use to make Emacs windows work better for me.

First, for my ultra-wide screen at home, it helps to enlarge the default frame size. After a few tests, I found this size satisfying:

(setq default-frame-alist '((width . 90) (height . 50) (menu-bar-lines . 6)))

Then, we need to turn on the vertical window divider. This will allow us to use the mouse to drag and adjust windows in our frame vertically as well. I actually don’t use this as much anymore (the reason is coming up), but this is a basic feature that should be on by default, in my opinion:

(window-divider-mode +1)

Another basic feature, which I now use everywhere, is visual-line-mode. If you write more than you code in Emacs (which is true in my case), this mode just makes sense.

(global-visual-line-mode +1)

Those are all good and nice, but here’s the biggest tip, which I stumbled upon a few months back on Ergoemacs: just don’t use windows. Use frames. Seriously, it’s that simple. There are probably many Emacs experts out there with their window functions and extensions and whatnot, but let’s not reinvent the wheel here. If there’s one thing a GUI is good for, it’s managing windows (or frames, in Emacs). They are very easy to move with the mouse, they snap to each other, and they’re easy to resize. Besides, Emacs itself splits frames into windows often enough. I find that I use C-0 and C-1 very often to get rid of windows I don’t need, and I can have them back quickly by switching back to the buffer with Ivy.

One of the things that used to drive me bananas when I started using Org was the agenda and its habit of killing my window setup. One of the most helpful lines in my settings.org is the following:

(setq org-agenda-window-setup (quote other-frame))

This saved me from going insane. Since I start up my agenda every time I start Emacs, this setting also effectively creates the other frame for me to work with until I exit Emacs. From there, I can just use the agenda frame itself to switch to another buffer if I want to. The agenda is also the only place where I do use windows often – when I tab into one of my headers there. I tab into a task, view it, make changes if I want, save, and C-0 to return to full agenda view. It’s so fast it’s just muscle memory now.

My Org Capture Templates - Part 1

emsenn asked Org-mode users for their day-to-day capture templates on the technology Mastodon (which you should join and follow if you haven’t yet). I was happy to reply and figured it would be interesting to dig into my capture templates. I started writing this post explaining all of my templates, but then realized I was editing out details because the post was getting too long. So instead, I’m going to explain one template at a time. Hopefully you find this interesting!

First, Here’s the code for the capture templates I’m going to cover:

("i" "INC" entry (file+headline "~/Documents/Archive/OhSnap!.org" "Incidents")
"** TODO %? \n  %^{Ticket}p %^{PIN}p %^{Computer}p %^{Location}p \n")
("j" "Journal" entry (file+datetree "~/Documents/Personal/journal.org")
"**** %U%?%a \n" :tree-type week)
("t" "ToDo" entry (file+headline "~/Documents/Archive/OhSnap!.org" "Tasks")
"** TODO %? \n SCHEDULED: %^T \n")
("e" "Event" entry (file+headline "~/Documents/Archive/OhSnap!.org" "Event")
"** %? \n %^T \n")

Going into the details below, I broke the code down so it makes sense to people starting out with Org-mode, in the hope this helps newcomers like I was not too long ago. If these steps don’t sound right (especially to those of you who know Emacs Lisp better than me, which is probably 99% of you…), please let me know so I can correct and learn. Thanks!

Ok, here we go.

Part 1 - The INC Template

("i" "INC" entry 
  (file+headline "~/Documents/Archive/OhSnap!.org" "Incidents")

"** TODO %? \n  
  %^{Ticket}p %^{PIN}p %^{Computer}p %^{Location}p \n
")
  1. For key-press “i”, initiate the “INC” template as follows:
  2. File under the headline “Incidents” in the file ~/Documents/Archive/OhSnap!.org…
  3. …as a second-level header. Create a TODO keyword, space, place cursor here, new line.
  4. Create property “Ticket” and ask for input.
  5. Create property “PIN” and ask for input.
  6. Create property “Computer” and ask for input.
  7. Create property “Location” and ask for input.
  8. Space, new line.
  9. Finish.

I use the above template for ticket creation at work. The file “OhSnap!.org” is my “dumping grounds” for everything before I sort it out – usually in front of a desktop. This is useful to me because many times I’m running around using my phone with Orgzly, a very minimal version of Org-mode. I click the add-note widget, speak or write out my note in a few words, and save. Later I add details. (By the way, going on a tangent here: you can use Google Assistant on your phone with Orgzly like with any other note-taking app: “OK Google, make note: buy eggs today” – and bam, you have a quick header in your org file.)

The properties help me keep my work-related issues organized. I have the ticket number for the issue, which is also the fastest way to find it in our ticketing system. Then I have the person’s ID, to quickly identify the person’s name and email from the directory if I need to. The computer property is the hostname, so I can remote in or know where it is. Location is helpful because we span over different buildings and even parts of the city. In turn, this also makes Column view beneficial if I want to quickly see all the tickets formatted in a nice table. I don’t use it often, but it’s there.

I don’t always have all the properties filled in (though I do try to at least have a ticket and a person’s name entered), but I do use this template for anything work-related. This is because I have a “work” category property inside the Incidents tree, which quickly shows me my work-related items on my agenda. This means I can also quickly filter down to just work-related stuff if I need to, say, when I show something to my co-workers.

Let me expand on that last point, at the risk of going off on a somewhat unrelated tangent: one of the huge things about Org-mode’s efficiency in my case is the ability to mix work and personal items in the same place. I have different apps, different accounts, and different devices, and Org-mode is the first place where I’m comfortable mixing all of them into one agenda while still having individual org files. This is very helpful in keeping me organized.

This concludes part 1. As you can see, I went into other areas that are not directly related to the template itself. Let me know what you think, and whether you want me to expand further! Thanks for reading.

My Org Capture Templates - Part 2

I was happy to see part one of this series generate interest on Reddit. One of the things I love about Org-mode (and Emacs) is the passion among its users and the thirst to learn more from each other. I believe one of the major reasons for that is Org-mode’s complexity: there are so many options, it has to be personalized to its users’ needs. Org-mode is easily the most personal tool I’ve used in a while. Every time I read something Org-mode related somewhere else, it’s as if I’m invited over for a discussion over a cup of coffee.

Speaking of personal, today I’m going to discuss my Journal capture template. I hope you enjoy, and as always, comments are welcome on Reddit (follow me using the links above) at least until I integrate a commenting system on this site.

Part 2 - The Journal template

First, here’s the code:

("j" "Journal" entry 
  (file+datetree "~/Documents/Personal/journal.org")
  
  "**** %U%?%a \n" :tree-type week)
  1. For key-press “j”, initiate the “Journal” template as follows:
  2. Go to ~/Documents/Personal/journal.org and create a datetree entry there.
  3. Create a sub-header (level 4) under the day header, under the week header, under the year header (a function of datetree, see link above).
  4. Enter an inactive timestamp, place the cursor right after it, and place a link to the file you were just in.
  5. New line.
  6. Make this a weekly datetree (the default is a monthly datetree).
  7. Finish.

I prefer a weekly format in my journal over a monthly format. In fact, the week is a “productivity unit” which has always made sense to me, and this mindset integrates beautifully into Org-mode. In my archive folder, which you saw in part 1, each week has its own .org file (from 1 to 50 something). I don’t really need this to make Org-mode work, but there’s something reassuring in viewing the folder and seeing all the week files.

The reason the timestamp and the link are condensed together (no space) is because I delete most of the timestamp and just leave the hour. This way the cursor is where I need to start erasing the timestamp from. I haven’t found a way yet to create an hourly timestamp only and I don’t want to forget to put in the hour, so this is the best solution for now. Why is the hour important? First, personal preference from the past, where I would write the time and the location for each entry. Second, this helps me see a couple of entries in the journal that are about the same event (the title is just the name of the event on my agenda) so that if a certain event is developing and I’ve journaled about it a couple of times, I can easily see the hour of each entry as a separator.
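On that note, a possible way around the deleting (untested on my end, so take it as a hedged sketch): org-capture also understands a %<...> escape, which expands through format-time-string, so a template variant could stamp just the hour and minute directly:

```elisp
;; Journal template variant: %<%H:%M> expands to the current
;; hour and minute only, so there is no full timestamp to erase.
("j" "Journal" entry
  (file+datetree "~/Documents/Personal/journal.org")
  "**** %<%H:%M> %?%a \n" :tree-type week)
```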

Finally, I use a link to connect me back to the w[##].org file (week number of the year, as I mentioned above) from which I created the journal entry. In my weekly org files, the incidents or events themselves contain very little detail, usually only my next action (the next thing to do) and a logbook drawer containing the times I worked on a task, if I keep track. If I want to discuss something, I create a journal entry for it. I always prefer to leave my thoughts and conclusions in a personal space, separate from the more generic org file, which also contains work-related material. The link takes me back to that event or incident, and because the name is the same as the event, I know exactly what I’m referring to.

static/Org-capture-template-2.1.png

My journal is another feature of Org-mode that fits my workflow effortlessly. The notion that I keep entries around events rather than just a daily or a semi-daily habit works well and behooves me to write my thoughts down often. At the same time, the journal file is kept in a personal folder that is not synced to my work VM. I use TRAMP inside my work VM’s capture template, so when I want to capture something in my journal from work, I’m prompted to log into my file server with a password, without saving anything at work.

This privacy barrier may sound cumbersome, but once it’s working, I almost never have to tweak it. This kind of privacy and separation of my personal files from the cloud helps me sleep better at night.

My Org Capture Templates - Part 3

I took a long, unplanned break from writing about my templates because of the CSS changes I worked on and the complications with Magit. I’m happy to say these are now behind me, and that I gained another grain of confidence in using Magit and knowing git, but that is a post for yet another time.

Since the last two templates are rather short, I’m putting them both here together. Here we go:

Part 3 - The Tasks Template

("t" "ToDo" entry 
  (file+headline "~/Documents/Archive/OhSnap!.org" "Tasks")

    "** TODO %? \n SCHEDULED: %^T \n"
)
  1. For key-press “t” initiate the “ToDo” template, which is as follows:
  2. Go to ~/Documents/Archive/OhSnap!.org and file the entry under the headline “Tasks” there.
  3. Create a second level header, space, add a TODO keyword, space, place cursor here. New line.
  4. Add the text “SCHEDULED:”, space, and ask for a complete timestamp (date and time).
  5. New line
  6. Finish

To create a regular “todo” item, I want to have it scheduled right away so it’s on my agenda. The word SCHEDULED: is what initiates the scheduling in Org-mode. On my agenda, I clearly see scheduled tasks, and I try to schedule everything I want to do. Scheduling a task, then, is my actual “To Do” trigger, not the TODO keyword.

Of course, things often get shifted around. I constantly reschedule things at work. The benefit of scheduling tasks this way is that tasks I haven’t done are highlighted the next day, so I know to revisit them. That’s all there is to this template; it’s rather simple.

Part 4 - The Event Template

The Event template is probably the first one I created, and it has changed around quite a bit. This is my go-to template for personal stuff, and it works hand in hand with my journal. Here it is:

("e" "Event" entry 
  (file+headline "~/Documents/Archive/OhSnap!.org" "Event")
       "** %? \n %^T \n"
)
  1. For key-press “e” initiate the “Event” template, which is as follows:
  2. Go to "~/Documents/Archive/OhSnap!.org" and file the entry under the headline “Event”…
  3. …as a second level header. Place cursor here. New line.
  4. Prompt for a complete active timestamp. New line.
  5. Finish

Even simpler than the previous template, this one asks for a complete timestamp (date and time) without a keyword. The active timestamp alone means that this event will show on my agenda, but without the scheduling, it’s not highlighted as a task (in my theme, tasks are in green and events are in gray). I use these “concrete” events for meetings with other people, or for things happening with people who are important to me. Events usually have an ending time, even if it’s speculative; I go back in later to adjust the timeframe.

I usually follow the event title with an @ sign and a link to the location on Google Maps if I can. This creates a nice way for me to reflect back later in my journal, which links to these events. For example, an event could be ”<2019-01-13 Sun 19:00-21:00> dinner with Marcy @ The Green Inn” and The Green Inn would be a link to the location. A quick C-c o and the browser opens to the right location, and I can look up directions. The location links also work from Orgzly on my phone, though I’m looking into integrating this whole thing with Google Calendar, which should work better.

After an event takes place, I place the cursor on it in my agenda and initiate the journal capture template, as I explained above. Something I started doing lately is to store the link of the journal entry while there (C-c C-l), and then enter a [J] at the end of the title of the event on the agenda, linking to the journal entry. If this works well, I will look into creating a macro or a function to create this journal link automatically. Shouldn’t be too hard.
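If I ever get around to it, I imagine it would be something like this minimal, untested sketch (the function name is made up for illustration, and it assumes the journal link was just stored, so it sits at the head of org-stored-links):

```elisp
;; Hypothetical helper: append a link labeled "J" to the heading
;; at point, pointing at the most recently stored link.
(defun my/append-journal-link ()
  "Append a [J] link to the last stored link at the end of this headline."
  (interactive)
  (let ((link (caar org-stored-links)))  ; newest stored link's target
    (when link
      (org-end-of-line)                  ; jump to end of the headline text
      (insert (format " [[%s][J]]" link)))))
```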

Hugo - what takes so long?

If you’ve been following me online for the last month or so (especially on Reddit), you’d know I’ve been engaged in shifting away from WordPress and into the world of Hugo.

I’d imagine the people in r/emacs would raise an eyebrow at the term “world of Hugo.” After all, Hugo is a relatively simple program, not a deep rabbit hole like Emacs. Yet, it’s Hugo that got me overwhelmed, not so much Emacs. What? Really?

You see, it has to do with background and expectations. As a non-programmer (as in, someone who never wrote anything more complicated than a few lines of shell script), Emacs was a mountain. Standing at the bottom, I gazed at the clouds shrouding its peak and told myself, “well, you gotta take a first step somewhere…” so I did. For me, that was Org-mode. As a matter of fact, at the time, I didn’t even know much about Emacs and how deep it can get. All I knew was that Org was cool and I was interested in learning more.

After a couple of months, I got a bit more comfortable with Emacs and my level of doing things with it. I am still miles away from the top, and I’m fine with that. I got Emacs to do most of what I want it to do for me at this point, which is writing these posts, my journal entries, and of course my agenda and tasks both at work and at home.

Hugo, on the other hand, was meant to replace WordPress. As a person who used WordPress on and off for a couple of years, I expected more or less the same thing. You know, going to my website online somewhere, logging in with a username and password, navigating the GUI to post stuff, adding plugins… As such, I didn’t care for a change that much. WordPress was working more or less OK, so why change to something similar and learn things all over again?

I get frequent alerts from my webhost that my website is down, but I got used to shrugging these off. I got it as a cheap deal, and for about $6 a month or so of hosting, what do you expect? People on the Emacs Reddit mentioned they see more spam from my site’s URL than my posts, but again, with one person complaining out of ten or so, it wasn’t a big deal. After all, SSL and https are for professional websites that can afford them, and I am just an amateur-ish blogger. Perhaps the biggest hurdle was GitHub: I knew people who use Hugo usually use GitHub to publish their blogs, but they were all programmers. Me, I didn’t know anything about GitHub or git. That was for coders, people who write scripts for a living. This is not me. I am not a programmer.

To be honest, I’m not sure what changed that perspective. Perhaps it was a random tutorial I saw about Git which made me realize it’s not that crazy complicated. Perhaps it was the fact that someone advised me not to link to my website again if I don’t have https. Maybe it was just my inner geek, itching for a change, and WordPress was getting too familiar and too boring. Whatever it was, I took the bait, and I started doing all of it at once.

First I watched some Lynda videos about git and read a few posts. Then, I learned more about Hugo. After that, it was ox-hugo’s turn (because I gotta write my posts from inside Emacs), and then it was Magit (because I gotta use git from inside Emacs). I think I went through everything in a matter of about a month. I learned too much too fast. But that’s how I roll. I don’t know why I do that to myself, and I’m not sure how it makes sense to go from “git is for coders” to “ALL THE THINGS,” but I do anyway.

Obviously, this attitude has terrible consequences. You learn everything on a very shallow level, which means the first tiny bump in the road sends you spinning out of control. You get frustrated and try again, just to hit another bump. Turns out, if I learn everything, I also expect to know everything, which is of course nonsense. But not everything is futile. Knowing the big picture in advance is not a bad idea. Learning a couple of things at once makes you realize how they work together, and helps you develop a mindset that is more skilled at solving specific problems. Later, when you go look at the official documents, certain things already make sense to you even though you haven’t seen them before.

Be that as it may, it’s not easy. I wouldn’t recommend this method to anyone; it’s doing damage control instead of learning. However, manuals never made sense to me. On the rare occasions where I do have the patience to read through the introduction, I’d forget what I wanted to do (or how to do it) by the time I get to the real stuff. Besides, manuals are usually written by the people who made the program; as such, they are written from the inside, for insiders. Certain terms and syntax make absolutely no sense to newcomers, who find that they struggle with basic terms that the veterans spit out as if they were born into them. Indeed, if there’s one thing I kept running into when learning Emacs, it’s not to bother with its documentation. As helpful as it is, and as much as everyone loves to say all the help you ever want is C-h v (or a, or whatever) away, it was not true for me. It is much, much easier to Google something and find a blog post (or a YouTube video) that explains it to you in plain English.

Now I’m much further ahead than where I was when I started my Hugo ordeal. I have successfully launched a couple of testing websites on and off GitHub. I have whined, complained, and whined again to anyone who would listen.

So, here we are with Hugo. I use git, and well, the site is on GitHub. This post was written with ox-hugo. I used Magit, but because there are so many issues I keep running into, I decided not to use it for at least another month. It’s very hard to hold myself back, for sure. On one hand, since I’ve done WordPress for so long, I’m aching to go back to my old website, where I didn’t have problems like learning how to create a new static page (one that doesn’t show on the front page of this blog), or figuring out exactly how images work with these posts, or how I can automate the long process of saving a post and updating my Hugo website compared to the simple “click to publish” in WordPress. I’m sure some of you are aching to tell me just how to do some of these things. Don’t worry, I’ll get back to you.

Oh, and speaking of helpful people: a big thank you to Kaushal Modi, who kept answering my endless questions, who was there through my frustration, anywhere I went. This guy is passionate and compassionate about what he does.

Switching to Hugo - Full Time

PayPal informed me that the host of my WordPress blog is charging me another $80 or so for the next year. The very next day, Tumblr decided to shoot itself in the head (rather than the leg, if you ask me). It was a good reminder not to trust cloud services with creative content - like my blog.

The Issues With my Host

My current host sucks. My WordPress blog continuously goes offline, so much so that I had to mute the alerts I got because they overwhelmed my inbox. My connection issues never got resolved.

Another major issue with that website was the lack of https (not my host’s fault, actually; it’s just yet another thing you need to buy) and the constant spam I got on the website. People would go to my link and find a totally different website trying to sell them something. That issue was not resolved either.

There were other technical issues, from WordPress not functioning right due to database corruptions (the host had an automatic set up process), to my domain not being resolved correctly. In the past, such issues would die out in the first month or two after I switched to a new host – but not this time. This time, as soon as I got the notification of renewal from PayPal, I immediately asked for cancellation.

More Work to be Done

There’s a lot more work to do on this website. I am still not happy with it, and I believe my WordPress blog overall still looked better. But it is up to me to make this place better with my own set of growing skills. CSS, ox-hugo, Go, and of course, the quality (and quantity) of what I write all matter.

I’m also not entirely happy with the blog being hosted on GitHub, which is now owned by Microsoft. It’s no different from Tumblr being bought out by Yahoo. I need to move to an actual host with a more liberal point of view about sexual content. Unfortunately, in this world of ours, this means I’m looking for a host that is OK with pornography. They have to include everything in one blanket statement so it’s easy to sort out in court, in case someone gets offended. But I’m not going to go down that hole right now; there’s more than enough of that on the internet already, with people freaking out about Tumblr.

So what’s the solution…?

I believe AWS (Amazon hosting service) might be it for now. I find it hard to believe they’d bother chasing me if I mention a sexual encounter or show a picture of a breast. Still, even AWS is not a safe haven. Hmmm.

How I Write Posts for my Blog

Even though it doesn’t show on this blog (yet), I’ve been blogging since I got into college, more than a decade ago. Blogging is an activity that combines a couple of passions for me: writing, technology, and visual creativity. My process hasn’t changed much over the years, and the tools I taught myself to use help me maintain my routine. This is a quick explanation of its main components.

Using the Journal

I usually create an event on my agenda and then write a journal entry about that specific event. This could mean several entries a day for different events, or on the other hand, only a few entries a month. This event-to-journal approach is helpful because it prevents me from writing too much at once and it keeps me focused.

Pictures are another thing. I’ve created a script that automatically resizes and labels photos from my phone to streamline the process. Pictures are useful on two levels: first, a personal visual decorates what I’m writing and makes it more memorable; second, the geolocation in the photo and the name of the file tell me exactly where I was and at what time: a quick reality “bookmark” I can refer to later when I write. For example, I will take a picture of pies in a pie shop, and later I will title the post “2018-12-24, 16:04 @ Big Pie Shoppe,” where the place name is a link to Google Maps showing me that location.

Because the journal contains personal things which are usually not immediately shareable or worth blogging about, I have a basic rule of thumb: I blog about experiences that happened at least a month ago. This allows some emotional distance from something I wrote about too impulsively in my journal and might want to reconsider. It also allows me to think of the event in question more like a story as I reflect on it, which makes the post more interesting to read. The idea is that if I have a good story to share, it can wait a month or even more and still be good.

Making an Outline

For me, making a basic outline is crucial. The outline is also a good tool to keep the post at a digestible length, something in the neighborhood of 500-700 words. An outline is nothing more than a bunch of bullet points which I expand on as I write. I do write longer posts, but these are usually more technical, instruction-like texts.

Another good thing about outlining is that it allows me to finish a post before I run out of “creative juice.” I write for about 40 minutes (give or take), and when I feel I’m done, it’s easy to get up and do something else. I return later to my draft to shape it into something worth reading, look for typos and grammar issues, and get it ready for posting.

static/drafting_a_post.png

My Tools: Hugo and Org-mode

I started looking into Hugo about three months ago as a replacement for WordPress. For non-programmers and people who are new to Git and GitHub (like I was), Hugo is not easy to pick up. I can’t recommend Hugo to folks who just want to start writing; it’s just too damn complicated. For me, as a techie and as a person on the lookout for a low-cost, reliable and secure solution, Hugo is excellent to learn and use. My website hosting is completely free, and the writing is integrated seamlessly into my journal, my calendar, and my tasks. It works well, but it was not easy to get there.

Org-mode is a huge topic in itself which I’ve spent hours writing about in other places. Speaking from a blogging perspective, I use a “bridging” piece called Ox-hugo which translates what I write in Org-mode to Hugo. If you have absolutely no idea what I’m talking about, here’s a way to think of it: Hugo is the publishing platform (think of it as WordPress), and Org-mode is the writing tool (think of it as - shiver - Word or Google Docs). Ox-hugo is what copy-pastes from Word/Google Docs to WordPress, complete with links, formatting, headers, etc.

These tools keep my writing easy and natural. What I write on my blog looks the same as what I write in my journal which looks the same as what’s on my agenda. To me, that’s one of the things that allows me to write much more frequently than ever before in a distraction-free environment. The complete opposite of MS Word, if you ask me.

CSS Updates

About some CSS Updates… and why I haven’t posted in the last two weeks or so. (Changes are now complete!)

Last week I started making some changes to my current Hugo theme, Hyde. This was after I tried out the Hermit theme and found out the hard way that my site is already “committed” to Hyde. Some of my customizations did not work well in Hermit and required more tweaking, while other parts of the theme didn’t look as good to me as they did at first impression.

So instead of changing themes I started tweaking my current one, and got myself lost between the branches of Git. Going in, I knew the changes I was working on needed more time than I could commit to in one sitting. Creating another branch seemed to be the way to go. Things got out of hand when I realized I had never started a new branch: I was working on my master branch, which was now out of sync with the one on GitHub.

If you find the above confusing – that’s fine. The problem was that I was in the same boat as you are now, scratching my head and going “huh?” And the more I struggled, the bigger the mess I made.

Thankfully, folks like Kaushalmodi exist. I don’t know where I would be without his help. Not on a Hugo site using Git, that’s for sure. I’ve learned a couple of things from him and the experience this time:

Magit can be very helpful for getting back on track. That’s because it gives you a sort of road map to your project. You can see the branches and then go up and down the list using the arrow keys to choose the right location. For the first time, I saw the benefit of doing that, because of the mess I was in.

A whole menu of options is a question mark away, and everything has a hotkey. Magit was something I stayed away from in the past because I’m still scared of Git. After my last experience though, I should keep using Magit for my work on this site. Magit is more effective and visual, and to me, the latter is crucial.

I will babble some more about my tweaks and CSS as I work on things. I will keep updates in this category (Blogging) so you can follow up, or ignore them.

Changes now merged (2019-02-06)

I completed some initial changes to the CSS and merged them back into my main site version.

Those of you who understand git won’t think much of it: all I did was merge a branch. Those of you who don’t understand git won’t think of it at all.

To me, however, this is quite a big deal. I have successfully finished my first merge, in Magit no less, after implementing changes on another branch.

Changes implemented

Most of the complaints I got about my current layout were about the width of the sidebar on the left. It takes up a lot of space, and for people who like to work with small screens, this was very annoying. The other issue I found was that this theme has three layout versions (call them extra wide, wide, and narrow). The widths were also specified in ems and rems in the CSS, which I didn’t like.

When it comes to screen width, I prefer to use pixels. So the site now has two versions: more than 800px and less than 800px. I will tweak these some more (I think 600px as a max is better). The sidebar was tweaked to be narrower, as was the site’s title font.
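For the curious, the pixel-based split boils down to a single media query; roughly something like this (the selector and widths are simplified placeholders, not my exact stylesheet):

```css
/* Simplified sketch of the two-layout split at 800px */
.sidebar {
  width: 200px;            /* the narrower sidebar */
}

@media (max-width: 800px) {
  .sidebar {
    width: 100%;           /* stack the sidebar on small screens */
    position: static;
  }
}
```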

Next, I think I will turn my attention to the navigation links themselves (under Topics) and make them more obvious with pure CSS; I have no intention of uploading fancy images. I also want to break up the empty white space to the right of the content in a way that doesn’t make it all look so empty. We will see what happens there.

If anything looks off to you, please let me know in the comments! Thank you.

Apple is a Hardware Second Company

You don’t need to look hard to find the holes in this Emperor’s clothes. Apple has been heading downhill for a couple of years, but it’s only recently that the public at large started to take notice.

Two good articles came out last week, one by Brian Barrett in Wired and the other by Kevin Roose in the NYT. Both explain that the issue behind Apple’s disappointing sales is closer to home than Tim Cook would like us to think: consumers don’t rush to buy expensive new iPhones when their older ones are almost just as capable and work fine. “It used to be that for $650, you got all new features, a better screen, everything,” says one tech analyst in the Times’ article. Now the phone costs at least $100 more (and the cost easily climbs, since you need to buy parts and upgrades) for not much new.

There’s more going on than just smarter customers and humdrum technology from Apple, and you don’t need to go digging to find the issue. Tim Cook is basically saying it himself: Apple is a services company now. Tech pundits including famed John Gruber agree.

But while Cook and Gruber believe in the “services first” future for Apple, the public is starting to digest the “hardware is second” present. Apple’s hardware is mediocre at best, easily comparable with that of other leading tech companies like Microsoft, Google, and Samsung. And, like these companies, Apple’s products are prone to defects, which are highlighted by Cook’s more apologetic (and, I believe, increasingly frightened) reactions. The era of “You’re holding it wrong” is over.

Apple used to be untouchable in its innovative software and hardware quality. Today, what remains of the glorious past is a hefty price tag.

Of Subnautica and Fear

static/subnautica_jrss.jpg

When it was time to build a base on my second attempt at Subnautica, I picked a location near the Blood Kelp Zone. Its cliff walls spread in front of me into the endless blue, and deep down I could see the pale bloodvines reaching up toward me like the claws of a forgotten demon. As if on cue, scary dramatic music started playing, and my PDA’s AI announced that the zone “matches 7 of the 9 preconditions for stimulating terror in humans.” I was thankful the game creators didn’t include the two other ones, whatever those may be.

The reason I chose this location for my home was because of fear. Subnautica is a game about fear, and it was teaching me to face it one step at a time. The first time I played, the game had the element of surprise. I remember my first reaper: it came out of nowhere and grabbed my Seamoth like a plaything. I yelped, slammed the Alt+F4 keys, and stomped out of my room white like the hallway wall I was leaning against, mumbling “oh my god” over and over. Now I know better. I know where they are, I can see them in the distance, and… I’m still scared. But I go ahead anyway. The fear is not pushing me away; it’s teaching me to be prepared. The only thing that’s really scary is fear itself.

Now the base is furnished, complete with a Moonpool for my Seamoth, which I call “Discovery.” It is powered by a nuclear reactor I built from fragments retrieved on brave explorations. A single glass corridor connects my living area to a bubble-like observatory room which hovers directly over the dark abyss. There’s a chair in the middle of that observatory, so I can sit and read my PDA’s contents while staring fear in the face.

I’ve found something at the bottom of the abyss. “Something that shouldn’t be there.” It’s a dark, green-hellish looking place with bones of creatures the size of an apartment building. Each day I explore further. Each day I push further and the game never fails to scare me. Winning these small battles against myself bit by bit becomes addictive. I look back at what scared me before and I know I’ve conquered it. I know that now if one of these monsters chooses to attack my base, I will fight it. The base hanging over the cliff that once terrorized me is now my home, transformed into my new comfort zone. I know every fold in the ground, every floaters-covered rock, every hole to the mushroom cave. You can’t be scared of what you know. Subnautica is an excellent teacher of this lesson.

Password Managers Hacks

News that surfaced last week reports a security flaw that exists in the most popular password managers out there, like LastPass, KeePass, and 1Password, among others. Does this mean people should stop using them? No. Will people use this news as yet another excuse not to use password managers? Sure.

The Washington Post’s Geoffrey A. Fowler nails it:

I still think you should use a password manager. So do the ethical hackers with Independent Security Evaluators who came to me with news of the flaws — and other security pros I spoke to about the study, published Tuesday. You wouldn’t stop using a seat belt because it couldn’t protect you from every kind of vehicle accident. The same applies to password managers.

(emphasis at the end is mine)

I am no security expert, but it seems to me exploiting this vulnerability requires physical access. This means that someone would need to grab your computer while you’re not watching, dump your RAM into a file, and run away with it before you come back, in hopes it captured the right password. This is a much lower risk than a bad guy finding the reused “super safe password” you use for everything somewhere on the dark web, logging into your bank account, your email, and your phone carrier’s website, and convincing your bank he’s really you. But don’t take it from me, take it from @SwiftOnSecurity.

A good point there, as well: keyloggers (or similar) are actually more likely than a memory dump because they come bundled with most RATs out there and other Trojans people download from the web every day.

But who’s going to listen anyway? I keep trying to get my partners, my family, and my friends on password managers. Almost no one buys in. My favorite argument of all time, from an old classmate: “I feel safer remembering my passwords than some teenager hacker from Indonesia hacking into LastPass and stealing all my passwords.” Of which she has probably no more than 7.

Sigh.

Elite Dangerous: Beginning

17:42. Almost time to go home. I’ve been trying to hold out a bit longer, to tell myself I should look around more before I buy it. Elite is complex, with a steep learning curve and no real objectives in sight. I was worried it’d be more work after work. So of course, I bought it.

The first time I heard about Elite was on Twitter, from one of the famed infosec folks I follow. Then I heard about a bunch of players on an epic journey to the other side of the galaxy, which grabbed me even more when I realized it was a gamer-community-driven event. This was also around the time I started watching Twitch streams more often, looking for a game that is more than just another hack and slash. Eventually, a few weeks ago, I started watching Elite streamers.

I had no idea how beautiful the game was. I wasn’t prepared to find myself watching streamers for long periods (something I’d never done before), glued to the screen and asking questions. And then I found Pave.

Pave is a real-world USAF veteran who now makes a living streaming Elite 8 hours a night, every night. I know, because I’ve been watching him for a week, glued to the screen until my head started bobbing up and down and then up again as sleep took over.

Armed with his mug of Joe, a classic storyteller’s voice, and patience tougher than nails, Pave explained everything I asked him down to the littlest detail. No question was too dumb or too long. Everything was explained. The more he explained, the more I learned just how vast and complex Elite is. Go ahead and try him: just ask him what joystick he uses and why he mispronounces its name, or what (in his opinion) are the best controls for the game. Pave has been playing since 2015, and it shows. Watching him maneuver around a space station looking for the landing pad, or zigzag his way through a dogfight; the guy has skills that kept me coming back for more every day.

When a game creates streamers like Pave, you know something must be right about it. When a game has been in existence (in one shape or another) since 1984, you know you’re looking at a true classic. And when players form a group that goes on epic journeys to save other players, you know you got a game that brings the best out in people.

So you know. I thought it wouldn’t hurt to try it…

About Emotional Exhaustions

This past week has been a good example of emotional exhaustion. Because it influences overall productivity, relationships, and health (both mental and physical), I’d like to discuss it here.

Defining What It Is

I started this post looking for a good definition of burnout, which is what I thought I was experiencing. Turns out, I was wrong. Like other terms that describe a relative, immeasurable emotional state, “burnout” gets overused and is eventually dismissed as dramatic conversational emphasis. After some research, I found the following definition from Sensagent:

The most well-studied measurement of burnout in the literature is the Maslach Burnout Inventory. Maslach and her colleague Jackson first identified the construct “burnout” in the 1970s, and developed a measure that weighs the effects of emotional exhaustion and reduced sense of personal accomplishment. This indicator has become the standard tool for measuring burnout in research on the syndrome. The Maslach Burnout Inventory uses a three dimensional description of exhaustion, cynicism, and inefficacy. Some researchers and practitioners have argued for an “exhaustion only” model that sees that symptom as the hallmark of burnout.

This definition and the link above led me to more research which focused on additional causes of burnout, most notably depression and a sense of lack of accomplishment and/or purpose. Because I’m familiar with these emotional states as well, I recognized these feelings were not present this week. I felt very tired and drained, but I did not feel sad or lacking a purpose. As a matter of fact, I felt focused (to the point of overdoing things, part of the reason for exhaustion) and very productive at work. I was pleased with my achievements this week, and my interactions with colleagues were better than usual. I then settled on this definition, from Wikipedia:

Emotional exhaustion is a chronic state of physical and emotional depletion that results from excessive job demands and continuous stress. It describes a feeling of being emotionally overextended and exhausted by one’s work. It is manifested by both physical fatigue and a sense of feeling psychologically and emotionally “drained”.

We can conclude that emotional exhaustion is a symptom or a part of a burnout. Exhaustion can lead to a burnout, which has the additional effects that were earlier discussed (like a sense of lack of purpose, cynicism, depression, etc.) This is important because it serves as a way to contain the problem so other areas that do not require examination at this point are left out.

Why Discussing Exhaustion is Important

Certain activities require energy while others “charge” it. As a mostly introverted person who values quiet and privacy, I spend energy on social interactions. Several activities on my personal calendar, such as attending a DnD session with friends, require a relatively high level of mental energy. Other activities, like working out at the gym, require a different kind of mental energy plus a level of physical energy. In a state of emotional exhaustion, both are depleted, which prevents me from doing these activities. In turn, giving up on something that is highly important to me (going to the gym) leads to guilt and self-blame, which is another step toward overall burnout.

To make a long story short, exhaustion means you’re doing less overall and could lead to further physical and mental problems that in turn mean you’re even less productive.

Causes

Before we can talk about “fixing” exhaustion, we should probably discuss the source of the problem. In my case, the source of emotional exhaustion is work. This is very common, and again, may be overused and overstated. Therefore, I turn to concrete symptoms, a simple series of yes/no questions:

  1. Have I worked extra time?
  2. Did I have more responsibilities at work than usual?
  3. Do I feel physically tired, to the point of napping or going to sleep extra early?
  4. Is there an increasing need to be left alone and rest (recharging)?

For the week that passed, the answer to all of the above is yes.

Fixing the Issue

How do you “fix” your car when you run out of gas? How do you “fix” your phone when the battery is drained? The answer is indeed obvious. The problem is that recharging is not always an option.

It may (finally) be the weekend, but other responsibilities and plans are waiting, since you did not have time to deal with them during the week. For me, it is also very easy to fall back on habits that waste time rather than recharge efficiently, habits that are less restorative than they are another cause of exhaustion. For example, playing video games until very late at night might feel like recharging, when what is really needed is sleep and a physical/mental break from the computer screen. Avoiding that, in turn, requires energy (discipline) which I may already lack because of the exhaustion. What do we do then?

  1. Divide activities into time segments: instead of ignoring the world and watching 5 episodes of a TV show, watch only one before getting up, walking around a bit, and evaluating things.
  2. Recognize true recharging activities, like sleep: recognize that I need to be in bed at a certain time and wrap up other activities.
  3. Lean on friends / significant others: if you’re lucky enough to live with people who care about you, it’s easy to reach out to them as a point of reference and have them keep you from sinking completely into something else. Have some tea together. Talk. Cuddle. Take a walk with them.
  4. Vent productively: find a way to let out your feelings in a way that feels productive and is worth your time. In my case, this post is that particular item.

Conclusion and Prevention

Sometimes, emotional exhaustion happens, or is triggered, because of an environmental element that is not tied to your own habits. It is important to recognize that prevention is not as important as having a way to cope and work your way out of a situation when it does happen. Being prepared, with a plan to fall back on when needed, is the most important thing.

Prevention comes second but is also important: it helps to know what’s coming and plan for it. This is one reason why I love Org-mode. Having a good tool is part of the job; using it is the other. For me, a good routine is to remote into my VM at work before I come in, turn on Org-mode, and sync my personal tasks. I then work from home, looking at my day and evaluating what needs to be done. While this changes during the day, knowing what’s coming gives me a top-down view of what a successful day should look like. In turn, this prevents a sense of lack of purpose and directly keeps me from sliding into burnout, as mentioned earlier.

Another important prevention technique that I rediscovered is to disconnect several times during the workday. For me, a minimal break of 10 minutes is needed to “reset” and get out of the “tunnel vision” I sometimes develop while working. I work in a large enough environment that allows me to take walks inside the buildings regardless of the weather. I use the time to find a quiet corner, look outside, and daydream a bit. In addition, napping has amazing benefits. It’s not always possible to find a spot to nap, but if I do, a quick dose of even 5 minutes around 15:00 (usually when the “crash” comes) can be immensely helpful.

Finally, it’s important to leave some empty time on your calendar. Some folks pencil in every possible day with activities, including the weekend. I know, for instance, that using my Saturday for activities with partners and family means I won’t have Saturday to myself, so I give myself time Sunday. This is a luxury not many adults can afford, especially those with a family, but for me this is thankfully not an issue.

It’s always important to look back and reflect on emotional events that take place, good or bad, to learn from them and improve for the next time. We are all learning creatures, and we can always “self-upgrade”. Personally, writing this post already helped me quite a bit. I hope it helps someone else reading it in the future.

What has Been Going on

I always dislike it when I read a post explaining why there’s been a break in posting. It’s usually a sign that the next post will be the last, if it happens at all, and it’s always some excuse about being busy, life getting in the way, etc. etc. So I guess now it’s my turn to write one of these posts.

I’ve been away for a week, traveling overseas to see family. The trip in itself was fine and not as eventful as I anticipated (which is a good thing). When I got back, I immediately bought two things: a HOTAS for Elite Dangerous, and a good headset with a DAC to experience music better. I am almost done writing a post about the latter.

My mornings, which were once completely empty and dedicated to writing, now share time with going to the gym. As it turns out, it’s much easier to keep a gym routine in the morning.

Emacs and Org mode - not much has changed there, as a matter of fact. I tried the Gcal package for Emacs and ended up disappointed after I figured out how to set it up. Orgzly, on the other hand, keeps evolving and is becoming, in my opinion, a must-have tool for all org-mode users with Android devices.

So there’s quite a bit to talk about and expand on. I hope to get back on track soon with a new post. In the meantime, feel free to say hello or read my older posts.

Upgrading my Audio Game

After a few months of research, I finally upgraded the audio system on my PC. As it turns out, it wasn’t as expensive as I feared, and the renewed interest in music took me in rewarding directions of sound exploration I did not anticipate.

First Steps: FLAC Files

As a teenager in the 90s, I was first introduced to real sound on a computer by a Sound Blaster card, in the first Prince of Persia game. Until then I was not aware computers could produce better sounds than 8-bit beeps. Music, up to that point, was either low quality on the radio or high quality through the stereo system my dad owned at the time.

That experience made me believe that if I want to get serious about audio today, I can’t start my journey just by looking at better headsets. As one of my favorite bloggers, BSIAG, wrote to me in an email: “the best way is to start with improving the source, and work your way towards the output (speakers or headphones).”

The first step sounded easy enough: using lossless FLAC files instead of MP3s and other lossy-compressed formats that are usually available on popular streaming services. But, if the popular streaming services (like Spotify, Apple Music, and Google Music) do not provide lossless audio, how do you get the good stuff?

The Library. Armed with my card, I ordered a couple of CDs from the online catalog to be delivered to my local branch. I picked up a few recordings and, after some quick research, settled on ABCDE (that’s “A Better CD Encoder.” Catchy, eh?) to rip the CDs into FLAC format. It did a fine job, connecting to CDDB to find artist names and downloading album art as well. I repeated the process for a few classical CDs, exploring different sounds.

I also tried a few CDs I already owned and learned an important lesson: many modern CDs are already mastered flat and prepared for loudness. In these cases, the FLAC format did not do much for the sound quality. In other words, if you listen to Marilyn Manson’s “The Beautiful People” on a streaming service or a CD, there’s virtually no difference in the richness of the sound. That’s because the richness I’m speaking of lives in the variation between the volume levels of different parts of the track. For example, the artist’s breathing, fingers sliding on strings, and certain bass sounds have different volume levels, which make the track feel whole and “multi-dimensional,” whereas a processed recording often flattens it out, usually drowning those other sounds out completely. This is perhaps why some people swear that live recordings are much better: in-studio recordings are more processed and hence probably flatter.
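For reference, the abcde invocation I used looks roughly like this. The device path is an assumption for a typical Linux machine; output directories and CDDB behavior are normally set in ~/.abcde.conf, so treat this as a sketch rather than a recipe:

```shell
# Rip the disc in /dev/sr0 to FLAC, non-interactively.
# abcde queries CDDB for artist/album/track names on its own.
abcde -o flac -d /dev/sr0 -N
```

One nice design choice of abcde is that each step (ripping, encoding, tagging) is a separate stage, so an interrupted rip can usually be resumed where it left off.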

I’m lucky to have a sound engineer in the family who loves and works with indie artists for the most part. When I told him about my quest toward better music, he pointed me at Bandcamp.

Bandcamp is a site that pays artists directly, and much more than streaming services do. At the same time, it keeps digital works available in FLAC format for download. You pay for an album, often as much as you want, and the music is yours to save on your computer with the artist’s thanks and blessing. Bandcamp remembers which albums you bought and slowly suggests new artists, and people to follow, for more music. For the last month or so I found myself doing something I haven’t done in decades: buying music albums. My growing collection already includes 10 digital albums that make me want to come home and do nothing more than sit in my chair, close my eyes, listen, and enjoy.

Equipment: DACs, AMPs, and Getting my Headset

Another thing BSIAG helped me with is solidifying the concept of a DAC (Digital-to-Analog Converter). Until that point, I only knew there was this “box” that serious music lovers and sound engineers connect their ultra-expensive headsets to. As it turns out, there’s a whole world of technology behind DACs and amps.

I can’t go deep into it here because I have only a basic understanding of it myself, but the general idea is that a DAC converts the digital information (think 1s and 0s) in your computer into analog (wave-like) information, the stuff we can actually hear. All modern computers and phones come with a built-in DAC (which is why you can plug a pair of earphones directly into your iPhone, for example), but these are very basic. An amp (simply, an amplifier) is a device that amplifies certain signals; in our case, sound signals. Why would you need all this volume pushing into your head? Isn’t that a bad idea?

Well, it turns out high-quality headphones resist electrical current more than cheap ones do. I’m not sure about the exact science behind this, but the resistance is measured as impedance. Better (and usually more expensive) headphones = higher impedance = more power needed (a stronger amp). There’s much more to this than I’m explaining here; take a look at this YouTube video that explains some of these concepts if you’re interested.

By the time I was ready to buy the DAC and the headphones, my partner got a job offer from a professional audio store in midtown Manhattan, Audio46. I’d heard good things about this place from the same family guy (har har) who told me about Bandcamp, and now that a visit to the store came with a lunch date with my partner, there was no excuse not to check the place out. And I’m happy that I did.

I don’t want to just blast them with praise (obviously I’m not objective), but a few things are important to note. First, they didn’t push a dollar beyond the budget I specified. These things are expensive, and as a person who’s only entering this world of sound, I’m probably an easy target for upselling equipment I may or may not need. Not here: I gave my price range and they made sure to stay in it. Second, I was able to choose and try on several headphones with my own music before committing to anything, and let me tell you, I took my time (at the end of the visit I was told some people try different equipment for hours). Their patience, and their ability to gracefully work with other customers while still checking on me, is a different experience than the kind you get at Best Buy, for sure. Third, knowledge. Some of the things I explained above were completely alien to me before I walked into this store. It wasn’t just a shopping experience, it was a learning experience as well.

I ended up buying the basic but popular FiiO Q1 for a DAC, which works on my phone just as well as it works on my computer, and a pair of Beyerdynamic DT-880s. Because I’m starting out, I wanted something that would also work on my phone when I’m traveling or want to listen to music on the go, not just at home. The guy at Audio46 gave me the 32-ohm pair, which might not be as high-end as the 250- and 600-ohm versions, but I agree with the phone-friendly consideration. As I grow in my audiophile-ish ways, this pair (and probably the DAC) will become my mobile kit, while I get something more serious for home only, which (I assume) will require more power than a USB port alone can provide. With this particular set I can’t pick up the difference between a 320kbps MP3 and a lossless WAV in this NPR test, but I picked out the 128kbps version from the rest every time. In Google Music, though, I could hear a difference right away between my FLACs and the same files after uploading them to the service, which compresses them to 320kbps. So maybe Google compresses harder than it lets on. Whatever the case, I’m very satisfied with my growing lossless FLAC collection.

Rediscovering Music

After almost two weeks with my new equipment, I think the biggest self-upgrade I got out of it is rediscovering music that I like and appreciating the artists who made it. I’ve found artists that create amazing music on Bandcamp, like Bruno Sanfilippo and Eamonn Watt. I discovered a new use for my reading corner, which now doubles as my listening corner. It’s very relaxing, after a day in front of computers at work, to close my eyes and just let the music flow in for a few minutes; it’s like taking a mental shower. This was an important upgrade, and I’m very happy I finally took this step.

Why Gcal Failed

I recently tried Org-Gcal a second time, only to turn it off again. I was following the (now slightly outdated) instructions on Mike Zamansky’s blog and managed to get it working this time, but I found the end result messy and cumbersome. This is mostly because of how I use org and my agenda to quickly get a view of what I need, not really a reflection on the tool itself. It made me understand how different the methodologies of Google Calendar and Org-mode are.

First, what exactly is Org-Gcal? It’s an Emacs package that allows Org-mode to sync with Google Calendar. Unlike other solutions, Org-Gcal works both ways, so you can create an event on Google Calendar and have it show up on your Org agenda, and vice versa.

As much as I enjoy Org-mode, it’s hard to eliminate Google Calendar completely, for a few reasons. First, Google Calendar is a smart and mature product that is integrated into Android. Your Google Assistant, if you use it, can quickly create an event for you, complete with location and invitations sent to contacts, with a simple voice request. If you let it, the Assistant can read what’s on your screen and suggest an event, build routines for you, and quickly save important dates and reminders. Further, as an owner of a Google Home, it’s nice to ask about your agenda for the day and be reminded of your meetings and tasks. Google Calendar is also a tool that works well with others, whether they use an Android device or an iPhone. An invitation via email is usually a seamless affair.

When I attempted Org-Gcal for the first time, I got stuck with my Google account authorization token. I use two different Google accounts on a regular basis, and I ended up requesting access on the wrong account. When I discovered that issue I retraced my steps but got stuck somewhere in the middle. I wasn’t able to get to the point of asking for another token from the other account, so I didn’t manage to authorize the right account, and I gave up for a bit.

The second time, something similar happened and I got stuck with a specific HTML error that led back to the wrong token. This time, after digging around a bit, I found where Org-Gcal stores that token (in ~/.emacs.d/org-gcal/.org-gcal-token) and deleted it. This caused Org-Gcal to ask for a new token, which I was now able to provide. Since I was following the instructions mentioned above, the setup came with two hooks that sync with Google Calendar automatically whenever I save my agenda or org file. I soon started seeing my events live, and then I experienced a couple of issues.
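For context, the setup I followed boils down to a config sketch like the one below. The client ID/secret and file paths are placeholders you get from the Google API console, and the hook choices follow the guide I linked; treat this as a rough outline rather than the canonical configuration:

```emacs-lisp
;; Minimal org-gcal setup, roughly following Zamansky's guide.
;; Client ID/secret are placeholders from your Google API console.
(require 'org-gcal)
(setq org-gcal-client-id "your-client-id.apps.googleusercontent.com"
      org-gcal-client-secret "your-client-secret"
      ;; Map each calendar to the org file it syncs with.
      org-gcal-file-alist '(("you@gmail.com" . "~/org/gcal.org")))

;; The "two hooks": sync when the agenda is built,
;; and fetch after finalizing a capture.
(add-hook 'org-agenda-mode-hook (lambda () (org-gcal-sync)))
(add-hook 'org-capture-after-finalize-hook (lambda () (org-gcal-fetch)))
```

Note that because both hooks talk to Google, a slow connection makes opening the agenda noticeably laggier, which added to the "messy" feeling I describe.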

First, each event started repeating multiple times in the org file dedicated to syncing. This is, I think, because Org-Gcal works both ways: it pushed events from Emacs to Google Calendar and then pulled them back from Google Calendar into Emacs. I noticed some events repeated even more than twice.

Second, and this is a harder issue to fix, is the lack of detail in events posted from Org-mode. On my agenda, my events have a title complete with a location, a link to that location, and a tag for the person I’m with. For example, “Breakfast at Tiffany @ Tiffany ::Tiffany::” conveniently shows me what I’m doing, where, and with whom in one line on my agenda. It’s crystal clear, and with a quick C-c C-o I can check the location, get directions, and later even have a journal link connected to it. Converting this to a Google Calendar event results in a long title full of the link’s square-bracket syntax, the location of the event left blank, and useless timestamp information in the details section. Of course, this is how it’s intended to work. While I didn’t expect miracles, it did make me realize that “translating org” to Google is not just a technical matter; it’s a different style entirely.

It is just so much easier to squeeze my Pixel 2 (which activates the Assistant if I don’t feel like saying “hey Google”) and say “Create a calendar invite, dinner at Tiffany at Tiffany Diner with Tiffany today at 8 PM.” Google Assistant will usually nail this on the first attempt. This is the Google style, and this is what it’s really good at doing. If I need minor adjustments, like inviting Tiffany or correcting the address, that’s pretty easy too with Android’s contextual AI and suggestions. In short, things are more inviting this way than with the straight sync the package offers.

images/why-gcal-failed.png

This style difference is a pretty big gap to cross. Org-mode is private, quiet, at-my-computer (usually) planning, whereas Google Assistant is not private (whatever Google claims). The calendar, in a weird way, is also not really personal: it’s usually meant to be shared with someone else. What I have in Org-mode is an event, and an event can be personal, often enough summed up in my journal with my impressions. The difference is big enough, philosophically speaking, that I could create the same occurrence twice: once on my Google Calendar, and once in Org. Sure, this is mostly a waste of time, and the difference is not always so black-and-white, but it’s there. For now, I am creating my Google events manually after creating the Org-mode events.

Switching to Manjaro

I’ve been using Linux as my daily driver for the last three years. I work with Windows on a daily basis and I support Mac issues all the time, but I don’t think I will ever go back to using either one for my personal data. As a matter of fact, it’s Linux which made an appearance at work: without my VM and Org-mode, I would probably be drowning under heaps of emails, meeting invites, support tickets, and the constant in-person walk-ins in our work environment. I throw everything I get at Org-mode to keep on top of the couple of tasks I try to tackle each day.

It was always clear to me that I have things a bit backward. Linux Mint, which I use all the time, is trapped in a VM on a Windows host I only use for games and entertainment (yes, I know you can play games on Linux. Trust me when I tell you I tried, and it’s not there yet. Not for me, anyway). I’ve had an old Dell Latitude 6430u lying around for a while as a backup and for travel. Big and bulky, it had Ubuntu installed, with much to be desired in the battery-retention department.

A week ago, due to various issues with Microsoft Windows patching at work, I decided it was time to try to leave Windows behind again and let Linux run free. It took me a while to realize that one of my big setbacks has been that I wanted to keep my work setup. I love my mechanical keyboard and the crisp, ultra-wide screen. I enjoy working at my desk behind a closed door, enjoying my music. Once I realized the main setback, the solution came easily: a KVM switch. After some quick research, I found this TESmart 4K HDMI KVM Switch USB2.0 Port. So far, it works wonderfully.

As I was setting the laptop up, another unexpected change took place: I gave up on Linux Mint and decided to try Manjaro.

I say “gave up” because installing Linux Mint was never easy. For as long as I can remember, on desktops and laptops alike, I always had difficulties booting Mint after install. It would install just fine, but on restart I ended up staring at a grub shell and couldn’t go further. I attempted different things, from switching between Legacy and UEFI boot to using Ubuntu’s boot repair tool, but nothing really worked. I spent hours on IRC trying to get help and grew tired of describing the same process again and again on various forums, trying to figure out what went wrong. This is why my laptop had Ubuntu to begin with, instead of Mint, which was always my preferred distro. I never liked Ubuntu’s “feel,” and that Amazon symbol in the middle of my dock (yes, I know you can turn it off) stuck out at me like a rusty for-profit nail.

I tried to install Linux Mint on my laptop again and ran into the same boot issue. I asked for help on the forums again and told myself I should wait until I got the help I needed and solved this issue once and for all. But I’d been hearing good things about Manjaro, and my fingers were itching, so I threw it all to hell and decided to go for it. I don’t think I’m coming back.

The first obvious good thing about Manjaro is that it worked without any issue after install, just like Ubuntu. No headaches, no asking for help, no forums, no nothing. The second major good thing, it turns out, is that I chose the KDE Plasma version.

Mint has always meant Cinnamon (a fork of GNOME) for me. While Mint offers Plasma as well, I never tried it on Mint. I did test Plasma briefly with Kubuntu at one point, but I wasn’t impressed. Manjaro, on the other hand, seems to be amazing with it. Many of the things I’m quickly learning to love in Manjaro (but not all) are Plasma-related, such as KDE Connect (which syncs notifications and texts with my phone), “hot corners” (one of the things I liked about macOS), Yakuake for quick pull-down terminal needs, and in general, the sleek modern design of the dark theme. It just looks /good/.

So far, any application that existed on Mint (which is based on Ubuntu’s repositories) works on Manjaro through its vast user/community libraries. Oddly enough, because Manjaro does not require adding repositories to install additional software, the overall installation process for “Ubuntu/Mint applications” is even smoother than it was on Mint itself. Speaking of software, Manjaro has the latest version of almost everything, which means I am now running Emacs 26.2 instead of 25, the version I have on the Mint VM. I have a feeling Google Drive support will become useful for me soon.
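In practice, installing software on Manjaro splits into two commands. The package names below are examples of things mentioned in this post; the AUR package name in particular is an illustration, not a recommendation:

```shell
# Official-repository packages go through pacman:
sudo pacman -S emacs yakuake kdeconnect

# AUR (user/community) packages can be built with pamac,
# which ships with Manjaro. Package name is an example.
pamac build google-drive-ocamlfuse
```

This is the "no extra repositories" smoothness I mean: there is no PPA to add first, just a package name to look up.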

I’m still working on the setup. For one, I need to re-wire everything and hide all the cords from sight. I also want to run an Ethernet cable from the laptop (having a bulky, heavy laptop has its advantages…) directly to the router for increased speed and a stable LAN connection for the SMB shares on my Windows machine, when I set them up. I also want to purchase a spare battery from eBay just in case; I’ve been running the laptop without a battery, connected directly to a power source. Currently, the CPU widget shows less than 25% usage (that is, until I fire up Chrome…) and the laptop’s lid is closed, in its “docked” state on my desk. If I want to go somewhere, all I have to do is attach the battery and disconnect the wires, and I have my Linux productivity setup ready to go in my backpack. No need for a thin sexy under-powered over-priced device here, we’re doing juuuuust fine.

Evernote - Behind the Scenes

A few years back, I was an avid user of Evernote. That was a period of information “innocence” for me, before I knew much about Linux and switched careers to IT. Back then, Evernote was an amazing discovery. It was the first serious digital (and, to be honest, analog as well) system I built. I remember many nights when I drifted off to sleep thinking about how to tag and archive my information in it. To this day I have plenty of memories stored away in its archive files – though they have since been downloaded and converted from the app’s native format.

This morning, YouTube’s algorithm picked up a video from Evernote and threw it my way as a suggestion. It’s a short introduction to a promised upcoming series from Evernote, called Behind the Scenes. Introduced directly by Evernote’s current CEO, Ian Small, this seems to be an honest attempt to “take us behind the scenes… and show us what [Evernote] is working on.”

Evernote is a for-profit company with a product that is very different than org-mode. While I have no desire or need to switch from org in the near future, it would be interesting to see what kind of challenges the makers of a popular note-taking app come up with. Who knows, perhaps I could learn a thing or two.

Why Managing Email in Emacs is not for me

I had Irreal’s post about mu4e for dummies on my todo list for a week before I decided to give it up. At first, I thought it was because setting aside the time and following through the guide was too much of a “cloud” in my head to tackle, so I broke it down. Then, as I looked at the individual components, I realized it’s more complicated than technicalities alone.

Org-mode has resolved an old problem for me that I didn’t know I had. Or, more precisely, I didn’t know it was a problem because it was too big to grasp: the issue of identities and online accounts.

Emails are a good way to explain this issue. I currently have four active email accounts: my main Gmail one, which is used for professional needs, financials, keeping in touch with family, etc. Then I have another Gmail account, which in the past was specific to other endeavors such as creative writing and close friends; this became a more personal account which is now associated with my calendar. A third active account is my work email, in Outlook, which is probably the main account I’d like to use within Org-mode. I also have an account with ProtonMail, which I try to use more often with my GitHub and other techy things I do, as I try to break away from Google’s hold over my life.

You’re probably starting to see the issue here, and it doesn’t really have to do with Emacs or mu4e.

I didn’t intend to have four active email accounts, but this is how it is. While it’s possible that others have no more than two email accounts, which might fall into nice buckets of “personal” and “work,” for me these buckets happen within Org-mode. This is why it’s such a critical part of my organization. Life is chaos. My job bleeds into my personal life. My technological passions bleed into work. I need the barrier between emails and calendars to be my Org agenda, which contains everything I want to do. Once I decide to put something on my agenda, it doesn’t matter anymore where it came from. What matters is that it gets done. That’s all.

Best Tools for the job: Flameshot

Every now and then I discover (or rediscover) a tool on Linux that does exactly what I need it to do. Some examples include ffmpeg, to shrink my self-recorded webcam journal sessions while increasing the volume; abcde, to rip a classical music CD to FLAC and find its exact title and track names; youtube-dl, which downloads any video I need from the web, paired with aria2 to speed up those downloads; and more.
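To give a flavor of what I mean, here is roughly what those one-liners look like. Filenames, the scaling size, and the volume factor are placeholders for my actual settings:

```shell
# Shrink a webcam recording (scale width to 640px, keep aspect)
# and double the audio volume in a single pass.
ffmpeg -i journal.mp4 -vf scale=640:-2 -af "volume=2.0" journal-small.mp4

# Download a video, handing the transfer to aria2 for speed.
youtube-dl --external-downloader aria2c "https://example.com/video"
```

The pleasure of these tools is exactly this: one line, one job, done.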

Today I want to praise a new excellent tool, this time for screenshots: Flameshot.

static/2019-05-25.jpg

Flameshot walks the thin line between a heavy, feature-rich screen-capture tool with all kinds of bells and whistles and the minimal screen-capture one that usually comes bundled with your distro. I think it manages to be “just enough” to do exactly what it needs to do. Nothing more, nothing less.

It was easy enough to tie Flameshot to my print-screen button. When summoned, the entire screen becomes dark, and a little message asks me to drag my selection with the mouse. The resulting window, which you can see in the picture above, is a frame that can be adjusted by dragging its sides to crop exactly what you need.

As you stretch the frame, the different tools buttons re-arrange themselves depending on the frame’s size. It’s a really nice feature which makes the whole application feel smooth and responsive. The wider the frame is, the more tools on the bottom. If the frame is really small, the tools will wrap all around the frame, creating a second row, as needed.

Flameshot comes with all the basic tools you need from an annotation application: arrows, lines, and now (finally) a text tool, which first needs to be activated from the tools configuration menu ($ flameshot config). There are also text-blur and upload-to-Imgur tools. Flameshot is minimal: there’s no eraser button, for example, because it’s not needed. You can only undo and redo. The color palette is basic, without a large color wheel. You can’t rotate and flip images; it trusts you to use a more “heavy guns” editing tool for that, reminding you it’s there only for a screenshot.
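For reference, all of this is also exposed on the command line; binding your print-screen key simply means pointing your desktop’s keyboard shortcut at the first command below. A quick sketch (the save path is an example, and flags may vary by version):

```shell
# Open the interactive capture frame (the command to bind to Print Screen):
flameshot gui

# Skip the frame: save a capture under a folder, or send the full screen
# straight to the clipboard:
flameshot gui -p ~/Pictures
flameshot full -c

# Open the configuration window (where the text tool gets enabled):
flameshot config
```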

It’s an elegant tool, with an intuitive UI: right-click to choose a color; scroll to make the lines wider or thinner. When you choose the upload to Imgur button, for example, it automatically does that in the background and opens a window for you to either save, copy the freshly-created link to the clipboard or delete the image from the site, which opens another tab in your browser for that. This is the kind of application that makes you feel proud using it.

Orgzly: An Interview

I’ve talked about Orgzly several times on this blog, but until now I haven’t dedicated a full post to it. Instead of describing my workflow again or simply repeating my praise for Orgzly’s usefulness, I thought it would be interesting to reach out to its creator and ask a couple of questions instead. To my delight, he was happy to reply! I’m happy to present my first interview on this blog.

First, for those of you who are not familiar with Orgzly, a quick intro. Those of you who are, just skip the next paragraph.

Org-mode, as awesome as it is, has one glaring problem that keeps many users from using it all day: its inability to go mobile. Org-mode is built into Emacs, which in turn runs on Linux (or, with some alterations, on macOS). This means you can’t take Org-mode with you on the go. For Android, Org-mode’s official tool was MobileOrg, which is no longer active. While workarounds exist, it’s probably safe to say the only Android tool worth your time out there is Orgzly. As I’ve mentioned several times before, it was actually Orgzly that got me into Org-mode and then into Emacs. I use Orgzly every day, all day. It is easily the main reason Org-mode is even an option for me at work: it’s what lets me access my agenda and todos, with all their details, during my running-around routine.

With this out of the way, let’s turn to the creator of Orgzly: Neven.

Neven prefers not to talk too much about himself but agreed to tell us he’s a software engineer working for a company in the US. His experience comes mostly from working with Java. Below, my questions are in italics, with his answers beneath.

Orgzly seems to answer a very specific niche: Android Org-mode users on the go. Are these the people you’re trying to reach?

The project was born from the need to have org files on my Android phone. MobileOrg was the only app available I could find, but I had a hard time setting it up and getting used to it. Initially, I started writing a web app, in Rails. But too many little things bothered me, it felt hackish and clunky. So eventually, I started playing with Android. I never wrote anything for it before, but since I use it, it seemed like the best alternative. Orgzly literally grew from the “hello, world” app, while I was learning to program for Android. So my first goal was to have an easy-to-use Android app for org files, yet powerful enough to be able to do the majority of work in it.

Having Org-mode users as the app’s early adopters was very useful, as they know what they want and how they want it. It was also a motivator for me. But I had non-Org-mode users in mind right from the start. Even with huge competition in the field of task apps, I thought it would be useful for the quality of Orgzly to make it tempting for those users as well and see what suggestions they’d have on improving it. It’s easy to get stuck writing for the very specific type of mostly tech-savvy users and end up with a hard-to-use app. Orgzly was made to support more formats to store the notebooks in - not just org files - from day one.

As an Org-mode user, how does Orgzly help you with your personal workflow?

When I got a job years ago, I needed a way to track all the projects and the little tasks I was working on. I never had a real need for that before, so I started using a simple spreadsheet (I used OpenOffice, which was popular at the time). Occasionally I gave random software or productivity apps a go, looking to improve my system. Eventually I tried Org-mode (after about a year or so) and never looked back. At the time I was a Vim user, but because Org-mode became such an important part of my workflow, I eventually switched to Emacs. It was way more convenient for me, as I became a heavy Org-mode user.

Nowadays I use Org-mode for pretty much everything: Orgzly-related work, day job-related projects, and personal tasks. Anything from new Orgzly repository type support to buying fruits at the store, really.

Can we expand on that? We Org-mode folk love hearing about other people’s methods and hacks…

I don’t do anything too crazy. I used to store the majority of my tasks in three big org files (Orgzly.org, work.org, and personal.org) with different states, tags and properties. Search speed became an issue for me in Org-mode, mainly when filtering by planning times and properties, so now I have two files per area: one for active tasks and one for some day/maybe (a la GTD) stuff. I also have a separate Inbox.org file (again taken from GTD, the method I’m trying to follow as much as I can in Org-mode). This seems to be working particularly well with a mobile app such as Orgzly, since you don’t want to spend too much time thinking where and how to store some idea or a task when you’re on the go.

What do you hope Org-mode users get out of Orgzly? What about the non-Org users you mentioned?

For Org-mode users, my goal is that they feel as comfortable and efficient in Orgzly as they do in Org-mode. This is obviously a huge goal, and the app still has a long way to go for that to happen, but I think it’s a useful goal to have since it helps to improve the quality of the app.

As for Non-Org-mode users, they should be able to use the app easily, without the underlying format of notebooks or any Org-mode-specific features getting in their way. They shouldn’t need to know what Org-mode even is, but if they learn about it through Orgzly, great.

What makes Orgzly different than the other task and productivity apps out there then?

Considering the number of apps out there, it’s pretty hard to stand out. Having notebooks in plain text and being able to sync them anywhere would be the main advantage I guess. Syncing is currently done through Orgzly’s Directory repository, and Dropbox is still the only way to do that directly.

Do you get any help developing the app? How much work and time do you put into it? Orgzly doesn’t have any ads or any other form of contribution, is that something you are looking into?

I try to work on Orgzly as much as I can, but I don’t do it nearly as much as I’d like to. It’s not always easy to find the time between my day job and personal projects. There are periods when I have a lot of free time to work on Orgzly, and there are times when I can barely get on top of my emails. Contribution on GitHub is great, especially small and tested pull requests which I can just merge immediately. There have been some larger projects done too. For example, there is a Git repository support currently in progress which I’m barely involved in, which is great. As for Orgzly monetization, my plan is to implement in-app purchases for the version in Google Play. (note: Orgzly is available in Google Play store and in the free F-Droid store - J.R.) I never considered adding ads (I do not like seeing them in apps). I considered accepting donations, but I prefer trying to have a long-term steady income instead, eventually.

Neven, thank you so much for agreeing to do this interview. Please keep up the excellent work!!

Thank you! -Neven

A Couple of Concluding Remarks

I have a few additional thoughts to share after hearing from Neven:

First, it was intriguing to hear Neven’s opinion about non-tech and non-Org users. In a way, he looks up to them as inspiration: the more they can use the app, the better the app is. This makes me think about the Org-mode and Emacs manuals. They are very detailed and very informative, yet in the beginning they served me only as a last resort. That’s because it feels like Emacs was written from the “inside,” by people who understand programming and Linux. Because of that, I often couldn’t find what I was looking for and ended up Googling basic questions. I didn’t know what to ask. Today, when I have a better understanding of what I need, I can use the manual more often. But Orgzly was not created this way. I started using Orgzly before I started using Emacs, and one of the reasons for that was that it made sense from the start. The manual was intuitive, short, and made sense. I think Neven is on to something very important here.

Second, I find that a couple of hacks really help me work around Orgzly’s limitations. It is, after all, just an “Org-mode light.” It might be useful to highlight some of these here again:

Neven said that in a mobile app, you don’t want to think much about what information you capture and where to put it. This is something I very much agree with. I have Orgzly’s agenda widget on my screen for my tasks for the day, which is a quick tap away from the projects I’m working on at any given moment (since I go back to my office to refile and organize between tasks). The widget has a plus sign at the corner which quickly launches a new note in my default “inbox” org file. When I create a note, I often don’t type. Instead, I tap the microphone on my Android’s keyboard and dictate what I need. Even if the transcription isn’t 100% accurate, it’s close enough that I know what I wanted to say once I’m in front of a computer. This is so quick it’s often better than using org-capture. When the note saves, it is automatically scheduled as a todo item on my agenda for the same day (an option in Orgzly), so it’s in front of my face on the agenda and I can refile and schedule it as needed without forgetting about it.

The second thing is Syncthing. Neven mentioned Dropbox, which works fine, but for the more privacy- and space-aware folks out there, Syncthing is a godsend. I wrote about it at length before, so I won’t get into my system here. If Orgzly is what allows me to work with Org-mode everywhere, Syncthing is the glue that makes it possible. An update made on my phone in the field shows up within a second on my VM at work and on my Linux box at home, and vice versa.

Of Emacs and Chutzpah

Says Jamie T. Rubin: “As a writer, I naturally want to spend my time writing. More and more I see tools getting in the way of writing. If that wasn’t the case, why do so many tools now add a “focus” or “distraction-free” mode? What choices can I make to simplify my writing ecosystem?”

My answer, considering the theme of this blog and how I’m writing it, is obvious. But the question goes deeper than that, and so should the answer.

Emacs is not accessible to most writers today. Most of them have probably never heard of the thing, or of any tools besides Word and, for the lucky ones, Scrivener. Emacs isn’t really known outside of seriously geeky circles because it’s written by computer geeks for computer geeks. Search for instructions, tutorials, or reviews of Emacs, and 99% of the folks you’d encounter are computer programmers. No fiction writers. Chances are that if you put Emacs in front of a writer today, they won’t even know how to install the thing, let alone use it.

I don’t think people who use Emacs understand how complicated and cumbersome it looks to the average person. They probably think we, Emacs users, are the ones who are crazy.

How come Word is so well known (and usually hated) by writers for all its distractions, while better tools for the job are barely heard of?

For one, consider marketing. The advertising Microsoft throws at its products is something the free-software community cannot even begin to compete with. For another, consider history and reputation. Microsoft Word (and Outlook and Excel and PowerPoint) have been a core component of office software for quite a few years. The suite is even called, justifiably so, “Office,” because what else would you use in your office?

But there’s another reason wannabe writers don’t find out about Emacs, and, if they do, never pick up the habit of using it. It’s the same reason wannabe runners won’t travel far beyond the gym, or people who wish to meditate won’t experiment with techniques beyond the ones they learned in a yoga class.

There’s nothing wrong with any of the three pursuits I grossly mashed together here. Running on a treadmill is a very good form of exercise, which can be measured and improved. It allows folks who otherwise wouldn’t run because of the weather or the time of day to still get their exercise. Likewise, there are writers who have used Word their entire professional career. And I can tell you there’s a lot to learn from a good yoga meditation class, which will supply you with the discipline and motivation to continue.

However, there’s a component missing in all of these. An element of the human spirit that, thinking back, can be described perfectly by one Jewish word I know: Chutzpah. If you’re not familiar with the word and just look it up on Wikipedia, for example, you won’t quite get it. The definition there is audacity, bloated self-esteem, ego. That’s all true, but it’s the kind of definition that makes you think of an overconfident, obnoxious teenager. That’s not what I’m talking about. Chutzpah, the way I mean it here, is to dare. To know there’s a way hundreds of people have done a certain thing for decades, and still question it. Far too many people don’t question things.

One of my biggest inspirations to use Emacs and Org-mode was this YouTube video by Jay Dixit, a writer who kept saying he’s “not a programmer” and shamelessly admitted to copying code and asking others to write code for him where he needed it. Dixit’s Emacs methods are nothing too special, yet he presented this lecture in front of experienced Emacs users (programmers, even!) who asked him questions. Dixit is the kind of writer who dared to wonder about something very specific his software couldn’t do, until he encountered Emacs. When he realized there’s a “bit” of a learning curve to Emacs, he didn’t think much about it and started learning. Not because he’s a masochist, or because he has tons of time, or because he’s a genius, but because his desire to write the way he wants was bigger than any comfort the “this-is-how-it-is” software offered him. So he dared, persisted, and, as far as I know, he still uses Emacs today.

Not too long ago, I took my first steps toward leaving Google Docs, which had been my go-to writing tool for a while. At the time, I hadn’t even heard of Emacs. Google Docs was very convenient, especially on the Chromebook I had. I’d been using bullet points in posts for a while (as evident in the post mentioned), and Org-mode was something that “clicked” loudly in my head, with a few exclamation marks; I just had to try it. Did I have Chutzpah? You could say so; I’d call it an obsession with writing apps. In the moment, you never know. People who keep talking about how daring they are usually aren’t daring at all.

Like many others who use Emacs today, I want it to be more accessible to the public at large. But, also like many others who use Emacs today, I realize the “quest for Emacs” isn’t something you can throw at people through a marketing campaign. There’s nothing easy, pretty, or “sexy” about Emacs. You have to earn Emacs through months of careful customization. My current setup, for example, includes these changes from vanilla Emacs:

  1. My window size is configured to be bigger than Emacs’s default, for my ultra-wide screen
  2. I use company-mode to auto-complete words
  3. I use Emacs’s Abbrev (which I customized) to expand common words and phrases
  4. I write posts in Org-mode and publish directly to GitHub through ox-hugo and Magit
  5. I have visual-line-mode on
  6. There are no icons in my tool bar (though I find the menu itself still useful)
  7. I’ve added many words to the built-in dictionary
  8. Custom key shortcuts for spellchecking, refreshing the buffer, and more
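For illustration, a few of the items above could be sketched as an init-file fragment. This is a hedged sketch, not my actual config: the frame size and the F8 binding are made-up examples, and company-mode is a third-party package you’d install first.

```emacs-lisp
;; Sketch of an init fragment for some of the customizations listed
;; above (frame size and the F8 key are arbitrary examples):
(setq default-frame-alist '((width . 160) (height . 50))) ; bigger frame for a wide screen
(global-company-mode 1)              ; auto-complete words (needs the company package)
(setq-default abbrev-mode t)         ; expand customized abbreviations everywhere
(global-visual-line-mode 1)          ; visual-line-mode on
(tool-bar-mode -1)                   ; drop the toolbar icons, keep the menu bar
(global-set-key (kbd "<f8>") #'ispell-word) ; example custom spellcheck shortcut
```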

And there are more (Ivy, which comes with Swiper…), with more still to come. I use my Emacs so much and so often that it’s hard to break it back down into the pieces that make it up. It’s memorized by my fingers more than my brain. It’s all a careful process of picking and choosing what I want and where I want it.

Emacs is not something you give to someone. It’s something they have to take and make their own. While veteran users of Word (or any writing software, or really almost any software) could probably say they make their tools their own as well, I’d argue that nothing requires customization like Emacs does. That’s what makes Emacs a lifetime tool, and why (I think), after all these years, it’s still shipped in a package that’s pretty much the same.

All of the above is necessary (at least I’d like to think so) to answer how Emacs, with all its hundreds (if not thousands) of customization options, is not one of the “writing tools that get in the way of writing.” In Emacs, the things that stand in the way of your writing are the things you haven’t changed or tweaked yet. In other words, the only thing that stands between you and your writing in Emacs is… you. There cannot be anything else, because every single part of Emacs is designed to be broken down to do exactly what you want it to do.

Org-capture in Files

I’ve been pretty busy, org-mode-ly speaking. There is a lot to say, and as I was writing my post, more ideas occurred to me that behooved me to stop writing and experiment more, which led to more interesting results, which meant I ran out of time to write about those results. When I finally returned to my post this morning, I realized there’s so much to explain that I can’t include it all in one go. So here you go, part one of my latest adventure in org-mode: org-capture from org files.

Why a File as a Capture Template?

Why do I need a template based in a file? To understand that, I need to explain some of the work I do as a help desk person in a large organization.

Among other things, my work includes prepping desktop and laptop computers. For the most part, this is done from an image, and all software that is not included is installed remotely from our SMA. However, there are still many cases where the automation fails or does not apply and hands-on deployment is needed. Some scenarios include:

  • BYOD devices, which need to be evaluated and prepared for our environment.
  • Macbooks, which we can’t automate yet for political-human reasons.
  • Replacements of old computers, which come in different models and different usage (and abuse) scenarios.

These scenarios are complex but similar: a perfect natural place for checklists. We’re a big company, and many times I find that checklists are half the job. Communication with clients, managers, and purchasing staff, as well as getting the right information from everyone and documenting it, is also a big part of the checklist. The difference between a job done with a checklist and one done without is so obvious that my checklists have been adopted throughout the team, and I’ve been asked several times to guide others (especially newcomers) because of “my” organization. I find this humorous because if you knew me, you’d know I’ve always been far from organized. It’s all org-mode, to which I’m very thankful.

In the past, I had a long checklist “template” under a header in my setups.org file, which I used to copy-paste into new projects. The idea of using a file as a capture template had occurred to me before, but because I had never seen a real-world example and didn’t see a clear reference to it in the manual, I dismissed the idea as wishful thinking.

Why Did Finding the Answer Take so Long?

I asked about using other org files as templates on Reddit in the past too, but didn’t get answers. I believe that’s because folks didn’t understand what exactly I was asking. And that, in turn, was because I didn’t know exactly what I was looking for. After all, if I had a more concrete idea, I’d probably have read this passage in the Org manual for what it was:

template

The template for creating the capture item. If you leave this empty, an appropriate default template will be used. Otherwise this is a string with escape codes, which will be replaced depending on time and context of the capture call. The string with escapes may be loaded from a template file, using the special syntax ‘(file “template filename”)’. See below for more details.

This little paragraph of text can be found about half a page down in the manual. I’ve read the capture part of the manual probably 20 times or more at this point, and I still feel I wouldn’t have known I could use a file as a template just from reading it. But why?

There’s no clear statement that says you can load a capture template from a file; rather, it states that “the string with escapes may be loaded from a file…”. To me, this meant that I could put my template’s definition in a separate file instead of specifying it in my init file. For example, here’s the code for one of my templates:

  ("j" "Journal" entry (file+datetree "~/Documents/Work/Setups.org")
"**** %<%H:%M> about %a \n%?" :tree-type week)

So, if for some reason I didn’t want the above snippet in my init file, I could throw it into a separate dedicated file. I didn’t bother with it because I’m comfortable having my capture template’s “string with escapes” where it is. Further, the manual says “see below for more details,” and there aren’t really any. There are only details about what is referred to as the “string with escapes.” There’s nothing that tells me I can have my whole template, huge checklist and all, in another file. And, as far as I know, there’s no other reference to files as templates anywhere in the manual. So I just figured this couldn’t be done, unless I wanted to specify a whole checklist in the string above, such as [ ] checklist item one \n [ ] checklist item two \n and so forth. I also didn’t know how to tell org-capture to preserve the indentation for the sub-lists I needed in some of my complex checklists. This looked very cumbersome to do inside the code itself when I already had an org file with the checklist to copy-paste from.

But there was a note somewhere in my journal saying I had seen someone pull a template from a file. I wrote down that I knew it was possible. So two weeks ago I went looking for this elusive option again, with a somewhat different attitude: I figured that if I found nothing, at least I’d learn other people’s workarounds and could find a way to get what I want.

How I Finally Found the Solution

I searched online more aggressively this time, using different queries. I managed to find a couple of org-mode questions on emacs.stackexchange.com that were not directly related to my issue but had the following lines in the code (I cut out the paths, as they do not matter here):

("j" "Journal" entry (file+datetree "...")
(file "..."))

And then, I found this talk by Jonathan E. Magen. Of particular interest was slide 6. Right there in front of me, the title read “Template stored in file,” and below it, a very simple example that tells org-mode to read the template’s content from a file. It was just as it is in the manual, but this time the slide was very specific, telling me: “this is how you tell org to read from a file.” It was that simple. So simple that it was right under my nose the whole time, but I kept missing it because there was no clear example or scenario; it was just another option mentioned briefly with a head nod, and here, someone finally pointed a finger at it for me. So it was true: Org-mode can read templates from other org files.

I wrote the code as I understood it from the examples I found and added (file "~/Documents/Personal/journal-tmpl.org") to the code above instead of that “string with escapes”, and got an error: org-capture: Capture template ‘j’: Template is not a valid Org entry or tree.
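Concretely, reconstructing from the journal template shown earlier (this is a sketch, not the exact code from my init), the capture entry now looked something like this:

```emacs-lisp
;; The journal template from before, with the inline "string with
;; escapes" swapped for a template file (journal-tmpl.org):
("j" "Journal" entry (file+datetree "~/Documents/Work/Setups.org")
 (file "~/Documents/Personal/journal-tmpl.org")
 :tree-type week)
```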

I had seen this error in the past, I remembered; somehow I had gotten this far before. But when I saw this error then, it was just more proof that I couldn’t use org files as templates. After all, journal-tmpl.org was a good org file that opened fine otherwise, just not through capture. The conclusion then was, wrongly, that capture can’t do it.

Now, through the eyes of someone looking for workarounds, my attitude was different. Org-mode, I knew (or Emacs, for that matter), should open any file if I tell it to, even an mp4 or a JPEG. I would see gibberish on my screen, yes, but it would open. It follows that if I see this error, capture should work: what I wrote does tell it to read the template from a file, but something in this file specifically is wrong. Suddenly, this error became encouragement that I was headed in the right direction. The error was telling me, “hey, I want to do this for you, but your file is messed up, sorry.” What I should have asked folks was “what’s wrong with my org template file,” pasting the code on Reddit, but I didn’t know to ask that. Now that I finally knew what the real problem was, I was in the right mindset to work it out. And work it out I did.

As it turns out (for reasons I do understand now), my regular org file, which started with options like #+TITLE: and #+TODO: and already had several headers nested inside it, failed to work. However, if I simplified the file down to the headers I wanted and the checklist itself, it did work. The reason, I know now, is exactly what the manual said all along: “this is a string with escape codes, which will be replaced depending on time and context of the capture call…”. Org-capture needs specifically org-capture syntax to work with, and the org-mode options at the top of the file are not org-/capture/ syntax; they are org-mode syntax. I needed to get them out of the way and feed org-capture its own syntax, just like I did in my init. As a matter of fact, as I now know, I can specify the entire syntax (the “string with escapes”) under my first header and it will work just fine; I don’t even need to specify it in my init. Yes, this is what the manual said. But no, this is not what it said to me. I just couldn’t see it.

What Does it Look Like?

Here’s an example of one of my org file templates. Notice how the very first thing is the header itself; it turns out it must be the first thing in org-capture syntax. The second line specifies what the template prompts for, just like it did in my init. Finally, the checklist itself:

* Setup Process [0%]
%^{Ticket}p %^{ID}p %^{Computer}p
- [ ] Rename (SLS-ID-MAC)
- [ ] Check OS Updates
- [ ] Encryption
- [ ] A/V
  - [ ] Install
  - [ ] Manage (overwrite XML file)
- [ ] SMA
- [ ] Restart 
- [ ] WiFi
- [ ] VPN 
- [ ] Another App
  - [ ] Install
  - [ ] Test
  - [ ] Test w/ VPN
- [ ] Asset Registration

The header has a percentage that changes as I fill in the checklist items. This allows me to see how close I am to being done straight from the agenda view. The second line tells org-capture to prompt me for properties: the ticket number, the ID of the person, and the computer name. Then I have a blank checklist ahead of me, which I can fill in according to whatever is already done.
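To illustrate, after answering the three property prompts, the captured entry ends up looking something like this (the values are made-up examples):

```org
* Setup Process [0%]
:PROPERTIES:
:Ticket:   12345
:ID:       6789
:Computer: SLS-6789-MAC
:END:
- [ ] Rename (SLS-ID-MAC)
- [ ] Check OS Updates
```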

This particular list is summoned by a line very similar to the one I showed above for my journal; I just specify a different file (say, mac-checklist.org instead of journal-tmpl.org).
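Sketched as a capture entry (the key, paths, and target headline here are hypothetical), that line looks like:

```emacs-lisp
;; Hypothetical sketch: summon the Mac checklist from org-capture,
;; filing it under a "Setups" headline and reading the template body
;; from mac-checklist.org:
("m" "Mac Setup Checklist" entry
 (file+headline "~/Documents/Work/Setups.org" "Setups")
 (file "~/Documents/Work/mac-checklist.org"))
```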

Conclusions

The result is four checklists I’m now working with and tweaking to perfection, summoned directly from org-capture. Because I was now encouraged to keep different checklists all in one place, I needed a way to have “submenus” inside org-capture; that is, to tell org I want to capture a checklist, have it take me to a checklist capture menu, and then choose one of my checklists from there. Along the way, I also learned of more org-capture options that completely changed how I saw my org-capture usage, to the point of reinventing it in my workflow. This is too much to include in one post, so I hope to expand on it soon. For now, here is the big lesson I learned, one I hope anyone who ever starts using org-mode takes to heart:

If you think something is possible, it probably is. One way or another, it is. Whether it’s hidden behind a few paragraphs of code or a mysterious hint in a manual, it’s worth exploring with an open mind. And for those of you who have used org-mode for years, especially those from a programming background: I hope this post shows that some of us struggle not with the code, but with the lack of good, clear examples. We need a story, a scenario, a reason to explore. Please give us more.

Quick Updates: June

Even though I don’t have any means of tracking how many readers this blog has, I can’t help but feel a little responsible for my audience. This is an attempt to see how a quick “here’s what’s up” post works while the blog is otherwise idle.

So, a few quick updates about what’s coming up:

I discovered I can use .org files as templates for capture. This opens the possibility of creating highly complex templates, like checklists with several headers, accessible from capture. It took me a year(!) to finally figure out, even though it’s dead simple. I want to discuss why it took so long, and how the Org manual could use more visual examples. I also want to discuss how I use these checklists at work, and why they’re useful.

As a result of the above, I am also changing my workflow in org-mode, or more precisely, how I use my files. I’ve decided to scale back from my multi-file approach to only two or three major files, and I’m in the process of testing this.

Mike Elgan writes about Google products that should be merged, and it reminded me how I used to have the same issues until I switched over to Linux and Emacs. People who use Linux have it better in terms of productivity. I want to discuss why.

Submenus in org-mode Capture

In my last post, I discussed how I (finally) found out that I can use entire org files as capture templates. This is a basic feature that works out of the box, but the org-mode manual doesn’t give it enough exposure in my opinion. Turns out it wasn’t just me either.

As I was expanding my checklists and learning more “trivial” org-capture features, I discovered more useful things, but ran out of time to write about them. It’s now time to get back to more “basics” of org-capture again for some helpful tips.

When you build your org-mode templates, it’s possible to create sub-menus for better organization. Another way to explain it is to think of “categories” in your capture.

Says the manual:

Keys

The keys that selects the template, as a string, characters only, for example ‘”a”’, for a template to be selected with a single key, or ‘”bt”’ for selection with two keys. When using several keys, keys using the same prefix key must be sequential in the list and preceded by a 2-element entry explaining the prefix key, for example.

The bold part was another passage I had to read several times to understand. I knew something was different because two letters are used, but my capture template still failed to work the first couple of times. I figured it out, and here it is:

 (setq org-capture-templates
  (quote (
          ("s" "Manual Laptop Setups")
          ("sd" "Staff Dell Laptops" entry
           (file+headline "/mnt/veracrypt1/Archive/OhSnap!.org" "Staff Dells")
           (file "/mnt/veracrypt1/Work/setup-dells.org"))
          ("sa" "Staff Apple Laptops" entry
           (file+headline "/mnt/veracrypt1/Archive/OhSnap!.org" "Staff MacBooks")
           (file "/mnt/veracrypt1/Work/setup-macs.org"))
          ("sm" "SLS-Mac" entry
           (file+headline "/mnt/veracrypt1/Archive/OhSnap!.org" "SLS-Mac")
           (file "/mnt/veracrypt1/Work/setup-SLS-Mac.org"))
          ("sw" "SLS-Windows" entry
           (file+headline "/mnt/veracrypt1/Archive/OhSnap!.org" "SLS-Windows")
           (file "/mnt/veracrypt1/Work/setup-SLS-Windows.org"))
... 

Let’s take it piece by piece from the top. Keep in mind this is not the entire template, just the relevant part. If you just copy-paste it, it will fail (it’s incomplete).

First, as soon as I start the capture templates, it seems as if I’m creating another template inside the first one. That’s what the manual means, though in my opinion it stumbles over its own words. This is another case where an example (like the one above) would go a long way. What I did was basically create a sub-menu for “Manual Laptop Setups.”

The result is that when I call org-capture, I get the following:

Select a capture template
===========================

[s]... Manual Laptop Setups...
[i] INC (my incident template)
[e] Event (my event and journal template)

I have more templates going down the list, but I want you to look at the very first one. The [s] with the three dots after it indicates that pressing s will take me to a sub-menu of the capture template, which looks like this:

Select a capture template
===========================

s [d] Staff Dell Laptops
s [a] Staff Apple Laptops
s [m] SLS-Mac
s [w] SLS-Windows

You can see from how the menu looks that all of these items start with an s, but this time without the brackets. The brackets indicate what you can press now, after you’ve already pressed s to get into the sub-menu you’re currently in. That is, d for the Dell checklist, a for the Apple checklist, and so on.

Each of these sub-templates is a checklist stored in an org file (like setup-SLS-Mac.org, the third template), exactly as I explained in the previous post.

This way I can have an entire “category” of capture templates, with s for setup, without ending up with a long list of letters that don’t seem related.

Basic SSH Security

They say a picture is worth a thousand words:

static/basic-ssh-security.png

This is my SSH server’s log, and this looks like a good time to talk about basic ssh server security.

Let’s back up just a bit for the whys and hows.

To access my personal org files from work (my journal for example), I use TRAMP with SSH. Why SSH? It’s rudimentary, supported out of the box, and relatively private. My work and personal tasks meet in my agenda, but I like to keep my personal resources away from work computers just as I prefer to keep work material away from my personal machine. So, I need to have an SSH server up and running. My setup includes a Raspberry Pi (cheap, reliable, good single-function device) as an SSH server and Syncthing hub and a router that has port-forwarding to allow incoming SSH connections.
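For context, opening a remote org file over TRAMP from Emacs looks something like this (the host name, port number, and file path below are made up for illustration):

```
C-x C-f /ssh:pi@myhome.example.com#8222:/home/pi/org/journal.org
```

Emacs handles the SSH session behind the scenes, so the remote file behaves like a local one once it opens.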

Even though I used a different port than the default 22, it was easy enough for the script kiddie in the picture above to find and identify. This is not hard to do; all you need is nmap. In this case, it seems some of the IPs belong to an Amazon server in the Philippines, so it looks like this particular individual uses an automated environment to find and exploit unprotected users. So far, this sounds like a classic scenario.

I sat down and decided to implement some basic security configurations I should have had in place since day one. None of this is ground-breaking security, and I’m no expert myself, but it’s probably a good start.

My Setup

These configurations are available inside /etc/ssh/sshd_config. I’m using OpenSSH server. If you’re using my configurations, remember to delete the “#” at the start of these lines, as they are marked as comments by default.

  1. Port ####: Specify a port number. Never leave it as 22. Go high up; I would personally start in the 8000s. These ports are less likely to be used, and script kiddies are less likely to sniff them with their tools. Not exactly ironclad protection, but we need to start somewhere.
  2. ListenAddress: The idea is to restrict the IP addresses the server will listen to, but this is not where we actually do it (it failed for me). Rather, I use it here as a reminder for later. Figure out where you’re connecting from and stick to it. You probably don’t need to SSH from across the world; for that, there’s Syncthing and a laptop. Read more about my approach further down.
  3. LoginGraceTime 20s: In seconds (as in, 20s), this is how long the server will wait before it closes the connection. Keep it low, since you’re going to copy-paste a long, 30-character password you will never remember from a password manager anyway (because you’re not going to use an 8-character password, right?? If you do, please stop reading right now and stop using SSH; you’ll be better off), and 20 seconds is plenty for that. You could use a key; however, I chose not to for my setup because I keep using different machines (I should probably stop), and I figure I can change my password every other month or so. This is easy enough to do with a password manager.
  4. PermitRootLogin no: It’s never a good idea to log into SSH as root; you can always escalate once logged in as a regular user. Don’t let hackers log in as root either. I can’t think of a good reason to allow root logins.
  5. MaxAuthTries 2: Yes, you get two attempts at the password before it locks you out. I’m not clear yet whether this lasts until the server resets or is timeout-based. Either way, since we copy-paste the password, this is a good security measure against someone trying to brute-force their way in from a specific IP address, like in my case.
  6. ClientAliveCountMax 0 and ClientAliveInterval 600: these work together, as per this article. The idea is to kill SSH sessions when idle. Here’s how it works: Interval is how long the server waits before sending a “hey, are you there?” probe, and CountMax is how many unanswered probes it will tolerate. With the values above, the server waits 600 seconds (10 minutes), sends its first probe, and since CountMax is 0, it disconnects right away. In practice, we get kicked off after 10 minutes of inactivity.
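Put together, the directives above might look like this in /etc/ssh/sshd_config. This is a sketch with example values only; the port number here is hypothetical, and you should pick your own:

```
# /etc/ssh/sshd_config -- example values only; the port is hypothetical

# Listen on a high, non-default port
Port 8222

# Close unauthenticated connections after 20 seconds
LoginGraceTime 20s

# Never allow direct root logins
PermitRootLogin no

# Two password attempts per connection
MaxAuthTries 2

# Probe an idle client after 10 minutes, and disconnect on the first missed probe
ClientAliveInterval 600
ClientAliveCountMax 0
```

Remember to restart the SSH service after editing the file so the changes take effect.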

Restricting to certain IPs only

In addition to these, I also restricted the allowed IP range, as mentioned above. To do that, we use two files: /etc/hosts.deny and /etc/hosts.allow. This is discussed here, in option 2 (TCP wrappers). The system described there did not work for me as explained, but after reading the instructions in the files themselves, I got it to work as follows:

In the deny file, we add the line sshd: ALL EXCEPT xxx.xxx. where the xxx parts are the first and second octets of the IP address we want to allow. This is usually enough to include all IPs from a certain place, but YMMV. In my case, this range maps to a specific office floor at my work site (which is fairly large), restricting access to my office floor area alone. When I tested the connection from my Android phone, I could not connect over my carrier, but I could connect from work, since that’s the IP range I specified. This is an awesome technique.
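As a concrete sketch, the deny file would contain a single line like the one below. The prefix “203.0.” is a placeholder taken from a reserved documentation range, standing in for the real first two octets of your allowed network:

```
# /etc/hosts.deny -- deny sshd to everyone except the allowed prefix
# ("203.0." is a placeholder; substitute the first two octets of your own range)
sshd: ALL EXCEPT 203.0.
```

Note the trailing dot: it makes the entry match a prefix, so any address beginning with those octets gets through while everything else is refused.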

Again, this works for me but may not work for you. For one, you might be using a laptop for work and roam across a wider range of IPs as you move around. In that case, I would consider using the laptop itself to store the files and sync with Syncthing. You may also decide to use just one machine, in which case you might want to allow all IPs but use an SSH key pair, which is far harder to guess than a password and automatically rejects connections that try to guess one. This is preferable to what I use; try to implement it first if you can.

Why do They Use a Mac?

Macs are evil. Macs are expensive pieces of toy hardware, often not up to par with what’s offered on the market (my personal biggest annoyances are the keyboards and the touchpad). Everything you do on a Mac you can do on Linux better… and so on. If all that’s true, why do so many professionals in IT environments (which are otherwise Windows-based) use Macs? I want to see if my perspective has changed.

For starters, Macs are consistent. Apple provides reliable, top-of-the-line support to keep things more or less the same for people with decade-long workflows. This is impossible to do on Windows. Many Linux distros offer similar long-term consistency (I’m a fan of openSUSE), but support is harder to come by. With Linux, many times you’re left scratching your head alone for days, even weeks and months. As a person with limited time, I have learned to appreciate consistency.

Macs also tend to be as simple or as complicated as you need them to be. For example, you could install Emacs on a Mac through Homebrew or MacPorts and “complicate” your workflow with Emacs to your heart’s content; on the other hand, if you’re fine with Todoist or OmniFocus, that’s one less headache for you. These apps are a pleasure to use on an iPhone too, which brings me to the next point.

Apple’s iPhones, love them or hate them, have been a seamless extension of Macs since 2007. I believe every tech Mac user also has an iPhone, unless they specifically need an Android phone for development or something similar. Every app I use on my work iPhone is polished, relatively easy to use, and works as expected. I can’t say the same for Android apps, especially in my case, where I sometimes use FOSS apps that don’t come from the Google store. There’s also a big point against Google: their inconsistency in supporting apps over time. Once you’re in Apple’s ecosystem, switching to anything else is painful and costly, productivity-wise.

There’s still also the Apple brand. People save up to buy a new computer and it’s going to be a Mac, because it’s a Mac. It is still widely known to be a good computer (if not the “best”), and it comes bundled with support to match. I’ve heard many complaints from Mac users about their devices over the years, but they almost always turn around and buy another one when the time comes. That’s because they don’t do their research (not necessarily out of laziness) and Macs always come up as the default option. While other brands slowly climb up there (and Apple arguably loses credibility), it will take years before any other brands will enjoy the same reputation as Apple.

While I’ve been using Linux as a personal choice for years, I’ve come to realize my reasons have changed. At first, it was mostly sheer curiosity and Microsoft-forced-updates trauma. Every computer I installed Linux on had a different distro and a different flavored desktop. As time passed, I started enjoying the consistency and efficiency of running processes through the command line. If I wanted, I could find both easily within macOS, but there’s another form of consistency at play here. I can take nearly any old computer, install Linux on it, and have it run exactly the way I’m used to, be it next year or 20 years from now. I can’t say the same about Macs (and definitely not about Windows). Looking forward, I believe I will keep preferring Linux machines, and if forced, will feel “OK” on a Mac. It’s pretty safe to say, though, that I’ll probably never again use Windows as a personal productivity machine.

Of Journaling

I’ve kept a journal (though infrequently) since my early 20s. A thick notebook with a table of contents on its first two pages made of a long list of dates. At the time, life was too chaotic to keep the habit uninterrupted. I’m not sure at what point I converted to digital form, but I flirted with digital journaling for years before it became my primary method of keeping a journal…

Interestingly, the old paper journals are the ones that survived and moved homes with me (for the most part), while the digital ones are lost somewhere in the cloud.[fn:1] I’ve tried many different applications, more than I can remember. My thoughts are spread across Google, Microsoft, Evernote, and Apple products, some defunct and impossible to restore. My journal today, in org-mode, is resistant to that - at least, that’s my hope. I enjoy keeping a journal here, on the same platform I use for blogging and organizing my day.

Compared to a written notebook, a digital journal offers a couple of big benefits:

  1. I can read my notes: my handwriting is horrible. English and Latin letters are not what I grew up writing (my early journals in my native language are not exactly neat either). This used to be a problem: Combined with spelling mistakes I’ve had early on, my writing was often a serious deciphering challenge. Digital text obviously fixes this issue.[fn:2]
  2. I can write more: I have more room to write, and I don’t struggle with turning the notebook around to fit sentences into corners. My hand gets tired writing for long periods of time, especially if I get excited or angry and press harder on the paper. I can also type faster than I write by hand and fit much more content in.
  3. Organization and storage: this is a bit of a contradiction since I said earlier I lost my old digital journals. This is now fixed though. Digital notes are more organized (I save mine by week number in a year, more about this coming up). As for safe storage, my journal is now backed up in several locations and can be restored if needed. The same is not true for my physical journals which can get damaged.

But… I do miss journaling on paper sometimes. Paper journals are 100% personal. The choice of notebook, the way it ages and shows daily usage, the handwriting, the cards/stickies/photos shoved and glued between its pages – things you can touch and smell – these are irreplaceable in a digital format.

For their birthday, I got my partner (who loves scribbling down their thoughts) a pack of limited-edition Field Notes with a couple of Parker pens. I was skeptical of the small size of these things at first, but my partner’s thrilled with these. They write vertically and horizontally (and other ways which make sense only to them); they add tables to organize their planning for vacations; the fact that you need a magnifying glass to see some of their side notes makes the whole thing look even better somehow. It’s a small package of pure personality, and it reflects my partner perfectly. I, on the other hand, used graph-paper notebooks for my journals for as long as I can remember because it just made sense. I used to align pictures with the blue guidelines when I pasted them in back in my paper-journaling days. These journals reflected my personality just the same.

I miss having something personal and un-technological in my life, especially as an IT professional. There are many ways I can explore un-technology, but I just like writing. Sometimes I just miss the feeling of the pen, and I have the itch to just sketch something quickly, without opening a dedicated program or having a specific goal in mind.

Here’s how my journal looks today:

static/2019-08-29.png

As I was preparing this post (which took forever for other reasons[fn:3]), I went through several of my older entries. The overall “feel” of the journal is very welcoming. In org-mode, each entry is folded neatly into its day, which is folded into its week number, which in turn is folded into its year. The whole journal, easily the largest of my personal text files, looks like 10 lines of text when first opened. You can see this effect above, where Monday’s entry sits at the fourth level (you can’t see the year in the picture, which is the first level).

To be honest, I wouldn’t recommend org-mode to journaling newbies or to those unfamiliar with Emacs; it’s like going to culinary school to learn how to make a simple salad. There is far simpler software for this purpose, and if you want to pick up the habit of journaling in general, I would recommend a notebook first.

For me, a person who uses org-mode on a day-to-day basis to keep track of tasks and writing this blog, this is a no-brainer. It looks fantastic and functions even better.

Footnotes

[fn:1] Today I cringe at the thought that my most personal notes are buried somewhere deep in a company’s server. Back then, I didn’t know better.

[fn:2] Many writing applications come with spell-checking options, which can be highly distracting. This is a habit that I believe started with Word’s red zigzag. It forces you to stop your train of thought and focus on the mistakes highlighted in angry red. Emacs doesn’t have this problem. Spell-checking takes place only after I highlight a segment of my text and ask for a spellcheck. Having spell-checking separate from the writing process is important! I didn’t realize how disruptive it could be before transitioning to org-mode.

[fn:3] These being mostly lack of sleep, insomnia of some sort.

Scripting in Bash 102

This is my first attempt at explaining a full (though simple) script I wrote. I always say I’m no programmer (or scripter). Despite that, I somehow ended up creating a couple of useful scripts on my Linux machine.

My elementary scripts are work in progress. I keep tweaking them as I learn and create new ones. I’d love to hear from experienced scripters just as I’d love to hear from those of you who never opened Nano before. Feedback is always welcome.

Newcomers: Few Basic Requirements

Before we dive in, a few scripting points to cover:

  • A script is nothing more than a text file containing a list of commands in Bash (Linux’s default shell). You can use any text editor you’d like to create the file, it doesn’t matter. I use Emacs.
  • Speaking of text editors: if you use Linux (or a Mac), you have Nano built in. Just type “nano” in the terminal to bring it up. It has a slight learning curve with its weird key bindings, so here’s a quick guide to get you started.
  • In Linux, it doesn’t matter what extension your file has. That world belongs to Windows and macOS. You can save your script as “myfirstscript” and it will run fine without an extension.[fn:4]
  • What is important though is permissions in Linux, a whole topic in itself. In order to allow a text file to run as an executable chain of commands, you need to permit it to do so. To do this, type in your terminal “chmod +x [your script path and name here]”[fn:5] to tell Linux this is an executable file.
  • You can’t just run your script by typing its name in your terminal and hitting Enter. That’s because it’s not part of your system’s path configuration, which tells Linux where the scripts and programs you can run are.[fn:6] You have to be specific and write out the entire path ”[path/to/your/script/script here]”, or be in the same directory as the script and execute it with ”./[script name here]”.
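The whole cycle above fits in three commands. Here’s a minimal end-to-end run, using a hypothetical script name (“myfirstscript”) in the current directory:

```shell
# Create a trivial one-line script in the current directory
printf '#!/bin/bash\necho "hello from my script"\n' > myfirstscript

# Mark the text file as executable (this is the permissions step)
chmod +x myfirstscript

# Run it with an explicit ./ path, since it's not on the system path
./myfirstscript
```

Running it should print “hello from my script” back to the terminal.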

Hopefully the above makes sense. If not, don’t worry about it for now, just try to follow the instructions.

[fn:4] If you ever write scripts to execute on Mac or Windows (say, in another program), you’ll notice these files have a “.sh” extension. But again, in Linux this doesn’t matter. Later you will notice that the script creates a file with a .org extension. What’s this hypocrisy, you ask? It has to do with the way Emacs is built. Emacs can open any file, but org-mode files (org-mode is the “plug-in” in Emacs that opens org files) are identified by the .org extension for the sake of convenience. You can also include a special line in a file that tells Emacs it’s an org-mode file, but using .org is just more natural.

[fn:5] chmod (change file mode) is a powerful and important command in Linux, outside of the scope of this post. You should check the manual for it (you should know how if you read through this post). This will take you down a rabbit hole regarding Linux file permissions, and you can read more about it here (one of many links available). Wikipedia also has a good section to get you started.

[fn:6] The path in Linux is a variable (you’ll learn about those in a bit) which contains all the directories where Linux knows to find scripts and commands. OK, but what does that mean? I can’t get into it here (because I’d never get to publish this post), but suffice it to say that the script you’re writing is not part of the “Linux default” command package, so Linux doesn’t know it’s a command. Imagine telling a person who has never shaken hands before to just “shake hands.” You’d have to explain it, and then that person would remember how to shake hands. Linux’s way of remembering how to shake hands (in this example) is to add it to the list of directories that include such instructions. This list is the path.

The Script

OK then, here it is:

#!/bin/bash

filename=w`date +%V_%y`
init_mon=`date +%Y-%m-%d`

cp /media/pispace/Documents/Archive/weekly-template.org /media/pispace/Documents/Archive/$filename.org

sed -i "1s/^/#+TITLE: Week Starting Monday $init_mon\n/" /media/pispace/Documents/Archive/$filename.org

Keep in mind that the website layout may break the source code artificially; the two commands toward the end, the cp and the sed, should each be one long line.

I’ll go line by line to explain what it does and hopefully how it works.

What does it do?

A practical little thing, this script creates a new .org file every week[fn:7] from a template and changes its title to “Week Starting Monday [date]” where the date is updated based on that week’s date.

So for example, on 2019-09-02 (at 3:00 AM specifically), my Raspberry Pi created a new org file. The first line in that file, which is the title, reads “Week Starting Monday 2019-09-02” (I like my dates in yyyy-mm-dd format).

How does it do it?

Ah. Well, this is what this post is about. Let’s dive in:

[fn:7] Automating tasks in Linux is done by a different component, called cron. It takes a specific set of instructions, written out in a string, and translates it into a recurring schedule (for this script: Monday, 3 AM, every week). Cron is something I hope to expand on in a future post.

The Shebang

The very first line, #!/bin/bash, is called shebang (or hashbang, but shebang seems to be more popular). Every script in Linux should (though there are ways around it, it’s just good practice) start with a shebang. What does this cryptic line do? We’re just on the first line and it already seems like we need to learn another language!

No worries. Everything looks big and scary at first, that’s why you break it down to parts you understand.

The # sign is usually used to enter a comment into your script, meaning the line is not a command and should be skipped. But combined with a !, it creates a special combo[fn:8] called the interpreter directive, which tells Linux how to interpret the script we’re about to write - or more precisely, where the interpreter is located, so the computer can find it and use it to interpret the commands.

Since we’re about to write a script in Bash, we need to tell our computer: “OK, this file is written in Bash, and here’s where you find bash” - which is exactly what the next part is: /bin/bash. That’s where bash lives, in your /bin directory. If it were in a directory named giraffe, for example, it would be #!/giraffe/bash.

[fn:8] This special combo is called a “magic number”, a unique value in ASCII that the computer understands as a direct command. I am not sure how many magic numbers like these exist. Sounds like something interesting to find out.

Variables

This is probably one of the most popular phrases used in scripting. A variable (or var for short) is a container for a piece of data - in our case, a string (a string is one type of data, but for our purposes here let’s keep it simple).

Our script contains two variables: filename and init_mon. It makes sense if you look at how they’re written: filename=[something...] and init_mon=[something...], like saying my_name=Josh-Rollins, for example.

In Bash, as soon as we place a = after a name like that, Bash knows this is a variable. Simple. OK, so what exactly goes into these containers? We’re going to find out.

By the way, you can define variables anywhere in the script (as long as it’s before you use them, of course), but it’s considered good practice to write them at the beginning of the script.
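To make the syntax concrete, here’s a tiny sketch of defining and reading variables in Bash. The values are just the examples from this post:

```shell
# No spaces around "=" when assigning; a "$" prefix reads the value back
greeting=hello
my_name=Josh-Rollins

# Quoting the expansion keeps spaces in the values intact
echo "$greeting, $my_name"    # prints: hello, Josh-Rollins
```

Note the one common gotcha: greeting = hello (with spaces) is not an assignment; Bash would try to run a command named greeting.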

The Date Command, and Reading the Manual

This is our first command. If you copy date +%V_%y and run it in your terminal, you’ll get a number, an underscore, and another number. If I type this today (which happens to be September 3, 2019), I get “36_19”. Did you notice the plus sign before the options (the letters with the percent signs)? It’s important: the manual says to use a plus sign when specifying a format to display.

We know a command named “date” is probably giving us date related output, and I just gave you today’s date… can you guess what this command does? What are these numbers?

To be sure, let’s run the manual command (man) for the date command. Type “man date” in your terminal. This is the manual for the date command (most commands in Linux come with a manual, isn’t this awesome?)

The most important bits of info to get from the manual are the name and the synopsis. The name tells us what the command does right there: “print or set the system date and time”, and then the description which is the same thing. Go ahead and run “date” without any format options (that is, without the “+%” something) part and see what it prints out by default; you’d notice it’s the same as specified under the “Synopsis” part of the manual.

In our case, we use the date command with specific formatting options. In the manual for the command, scroll down to “Format” to find these. Do you see how many options the date command has? You can print out the current century or even the number of seconds since the beginning of 1970[fn:9]. The options used in the script, %V and %y, give the week number of the year and the year’s last two digits. The underscore in between is nothing but a separator that will later show up in the file name, giving the format [week number]_[year’s two digits]; with the leading “w” from the script, that gets us something like “w36_19.org”.

The other variable, init_mon, is another way to get output from the date command. Go ahead and try to figure out the options used on your own this time. Why do I need this second date? We will find out shortly.
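Here are the two date formats from the script side by side, so you can run them yourself and compare the output:

```shell
# "w" + ISO week number + "_" + two-digit year, e.g. "w36_19"
filename=w$(date +%V_%y)

# Full date in yyyy-mm-dd form, e.g. "2019-09-03"
init_mon=$(date +%Y-%m-%d)

echo "$filename"
echo "$init_mon"
```

(The script itself uses backticks for command substitution; $(...) does the same thing and is the form usually recommended today.)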

[fn:9] Turns out that this date (1/1/1970) is known as “The Unix Epoch.” A quick search led me to this this discussion. Turns out such dates are common in the computing world… read and learn!

The Copy Command and Using Variables

The next line starts with “cp”. This is simply us writing out a command, nothing fancy. cp stands for copy in Linux, a command that copies files and directories. Don’t take my word for it; check the manual!

The command then says to copy my weekly org template (I talked about org files as templates previously) from the origin directory to the destination directory (this format, of writing the origin location first, space, target location, is also noted in the manual. You have to follow this order), as a file named… “filename”.org. And filename is the name of our variable, from earlier. We tell Bash we want to use the data in a variable (remember, it’s just a container) by writing a dollar sign in front of the name of the variable we want to use. I added “.org[fn:10]” at the end because - you got it - this is going to be a .org file.
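Here is the same copy step with short, throwaway paths standing in for the Pi’s real ones (the /tmp/archive directory and template contents below are hypothetical):

```shell
# Build the weekly file name, e.g. "w36_19"
filename=w$(date +%V_%y)

# Stand-in for the real archive directory and template file
mkdir -p /tmp/archive
printf '* Weekly checklist\n' > /tmp/archive/weekly-template.org

# cp takes the origin first, then the target, in that order
cp /tmp/archive/weekly-template.org "/tmp/archive/$filename.org"

ls /tmp/archive
```

After this runs, the directory holds both the template and a fresh copy named after the current week.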

[fn:10] When scripting, certain special characters (like our $ above) are reserved. This means that if we wanted to call our variable “$usd” for example, we couldn’t. There are certain ways to tell Bash we want to use the character as a character, not as a “special signal.” As a matter of fact, the period in my .org is a bit dangerous because the period also has a special meaning. I should have typed out something more specific telling Bash the period here is meant as just a period, not a signal – but at this point I’m not sure how the syntax would look like. I’m learning these things myself, after all.

The sed Command

The sed command stands for “stream editor”. This is one powerful command, and I’m only scratching the surface here. It allows you to manipulate text in all kinds of ways, but probably one of its most popular uses (as in this script) is to substitute one piece of text with another (in Bashspeak, we call these strings).

We call the command, sed, with the option -i, which tells it to edit the file in place. Basically, “just do it, don’t show me.” Without -i, sed would print the modified text to the screen instead of changing the file.

The rest looks a bit crazy, but hang on, it makes sense:

sed: Using Quotation Marks

We’re going to use the quotation marks to contain our entire stream of expressions and options (you can see it ends at the very end of the line). It’s our way of telling the script to take “this,” where “this” is everything included in the quotation marks[fn:11]. We need them here because our substitution includes spaces, and spaces are usually interpreted as argument separators. Remember the cp command, and how it uses a space to differentiate between the origin and the target? Well, something similar happens in sed, so if we just included spaces without the quotation marks, sed would do something else.

[fn:11] Those of you who look carefully might find yourself asking, “OK, but the quotation marks include the command syntax, not just the text we want to use, what’s up with that? Why isn’t it 1s/^/"#+TITLE: Week Starting...”? And I have a good answer: I don’t know. It doesn’t make sense to me either at the moment, but in all documentation I find, this is how the syntax works.

sed: Selecting the Right Text

Next, we have 1s/. This is actually two in one combo. 1 for first line and s which tells sed we want to do a substitution. Then we have a forward slash which is how we tell sed this is the expression we want to replace. In other words, we are selecting the text from here going forward, until the next forward slash.

Now wait a second. Didn’t we already use quotation marks to tell sed what parts we want to work with? Kind of. Not really. The quotation marks acted as a wrapper for the whole expression, the text we want to replace (which is missing in this script, I will talk about this in a second), the text we want to replace with, variables… the whole shebang (sorry, couldn’t resist). See, quotation marks work in Bash as “wrappers” as they do here. The forward slash, on the other hand, is specific for the expressions inside the command here, sed.

Think about it like a sandwich: when you order one you get it in wrapping paper and a plastic bag. You don’t eat those, that’s just how you carry it home. Once you take it out, you still have a sandwich, and this sandwich includes the good stuff inside. The quotation marks are the plastic bag and the wrapping paper, while the slashes represent the slices of the bread. You eat those; they are part of the “food,” the command, while the bag and paper are not. Both act as wrappers, but for different purposes.

The last part of selecting the text is the caret (^) sign. This is a bit of regex (short for “regular expression”) which says “go to the very beginning of the line”. Regular expressions are a whole world of their own, a powerful way to describe text patterns to the computer. I explored a bit of regex earlier if you’re interested; this is a good example of when it comes in handy. Combined with the 1s from earlier, it tells sed to select the first line, at its beginning.

sed: Replacing Text and Placing in a File

Above, we went over how to direct sed to the right text we want it to replace, but we didn’t tell it what to replace, and what to replace with. This is what’s coming up next.

Remember how the forward slashes represent the bread of the sandwich for the sed command? These are called delimiters. sed’s substitution defines our sandwich like so: “replace /this/ with /that/.” The syntax looks like s/this/that/. The first part tells sed what’s getting replaced, the second part what to replace it with.
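A one-liner shows the delimiters in action (no files involved, just a piped string):

```shell
# s/old/new/ - the slashes separate the pattern from the replacement.
echo "replace this" | sed 's/this/that/'
# replace that
```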

In our script above, we told sed to match nothing at the beginning of the line (there’s nothing between the ^ sign and the next slash) and to substitute in ”#+TITLE: Week Starting Monday $init_mon\n”. Because the pattern matches the empty string right at the start of the line, sed doesn’t search for anything specific or remove anything; it inserts the replacement text at the very beginning of the first line. And to make sure it lands right at the beginning of that line, we specified the caret from before.

If you use org-mode like me, you’d recognize this bit of text: it’s org-mode’s syntax for specifying a title for an org file. So, our sed goes to the very beginning of the very first line and writes the “#+TITLE…” line there.

You’d recall from before, where I discussed variables, what the dollar sign is: we’re calling our init_mon variable here, which contains the full date of each Monday. The title is “Week Starting Monday ” and then the date, as I explained above.

Then, just before the closing forward slash after the variable, we have a special bit of escaping: \n. This means “start a new line please” - just like pressing Enter on your keyboard. And… done, we just finished our sandwich, wrappers and all.

Finally, after a space, we have the target of the whole sed command. This is where I specify the file where this line of text should be added. In our case, it’s the file we copied from our template above. So the sed command takes a generic line in the template that serves as a title placeholder (I simply typed in “#+TITLE: Week Starting Monday ----”, but it could have said “pink rabbit” or simply nothing, it doesn’t matter, since this line only marks the spot) and puts the real title in its place.
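Putting the pieces together, here’s a self-contained sketch of the whole thing (the file path and date are stand-ins for the script’s real values; GNU sed assumed). Note that in this sketch the placeholder line survives on line 2, because the pattern is the empty match at ^; matching ^.*$ (the whole line) instead would overwrite it:

```shell
init_mon="December 2, 2019"   # stand-in for the real date variable
printf '%s\n' '----placeholder----' > /tmp/week.org
# Insert the title at the very start of line 1, then break to a new line:
sed -i "1s/^/#+TITLE: Week Starting Monday $init_mon\n/" /tmp/week.org
cat /tmp/week.org
# #+TITLE: Week Starting Monday December 2, 2019
# ----placeholder----
```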

sed: More About the sed Command

I’ve used different sources when I wrote this post, and I’d just like to mention a few in case you’re curious and want to go down the many rabbit holes of this awesome and complex command.

First, there’s the GNU manual for this command, which goes beyond the man page. Just so you get an idea, it’s a half-megabyte PDF of almost 40 pages. Don’t say I didn’t warn you.

Then there’s this excellent tutorial that came up first in a search. It’s long and thorough, with a touch of sense of humor. A bit more advanced.

If you want to read up more about regex, I found this as a helpful reminder.

And then you can always use Stack Overflow for specific questions such as “What does the sed -i option do?”

Conclusion

That’s it. If you followed along, you probably wondered where’s the part that automates the whole thing, so it runs every Monday. The way it is now, I have to remember to run the script every week. What’s the point in that? As I mentioned in the footnotes, the automation part is cron, and it will be discussed next time.

You may have more questions now than you had before reading the post. That’s a good thing - you now have specific questions which are more likely to lead to specific answers. I hope to get many of these questions myself so I can update and modify the post to help more folks. Linux and Bash are a wonderful thing. You get all this power to automate and create things completely for free. I spent over a week writing this post, and one of the reasons is that I kept getting distracted by “why is that?” and then looking for answers. The research is one part of the fun, sharing it is another.

Thank you for reading!

Footnotes

Tools in Windows

My first public video introduced me to new challenges I didn’t face until now. Case in point: displaying milliseconds in VLC. I looked online in various places for a solution with my tools of choice, VLC player and ffmpeg. Workarounds do exist, but not nearly as simple as what exists for me in Windows. This happened to me before. As I was starting to get stuck, I realized that sometimes it’s OK not to wrack my head over every single hurdle. This is a reminder for myself, and will hopefully save you some time too.

Milliseconds? Why?

While going over my video and deciding on parts to cut out (you guys don’t need to see me reading from my notes and going “uhhh, that doesn’t make any sense…”) it became clear that the cuts are sharp and unpolished. I start to say something, and then the video cuts to me looking in a different direction, starting a new sentence. It’s jarring and makes me cringe.

FFMPEG allows specifying which segments you want to isolate down to the millisecond to prevent this from happening. Figuring out the precise cut point at that level, though, proved to be too hard in VLC. I’m sure I can tune VLC to move frame by frame and somehow display the exact timestamp I’m on, but after a day and a half of looking into different solutions and two extensions, I gave up.
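For reference, the kind of cut I mean looks roughly like this (input.mp4 and clip.mp4 are placeholder names; -ss and -to accept timestamps down to the millisecond):

```shell
# Copy the segment between 1:23.450 and 2:10.000 without re-encoding.
# Note: -c copy snaps to keyframes, so for truly frame-accurate cuts
# you would drop it and let ffmpeg re-encode.
ffmpeg -i input.mp4 -ss 00:01:23.450 -to 00:02:10.000 -c copy clip.mp4
```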

Windows and MPC-HC

Media Player Classic: Home Cinema is exactly what I needed. I remembered it from my pre-linux days, when I used it to watch different exotic video codecs that would not play on any other player.

It’s a lightweight tool that doesn’t do much else (as far as I know) besides playing videos. But right-click on the timestamp, choose “high precision”, and bang: a day and a half of looking into complicated solutions resolved in a single right-click. The fact that I sometimes record and always edit my videos on my Windows desktop (which is my much stronger, gaming-oriented machine) helps.

Unfortunately, MPC-HC is discontinued. After all, who needs such a niche product when you have something as robust and full-featured as VLC…? So while it runs great for now, I probably need to keep my eyes open for a replacement (any recommendations are warmly welcome).

But for Everything Else, Linux is Awesome… Right?

For the most part, yes. But there are a few exceptions.

Another good tool without a Linux alternative that jumps to mind is Greenshot. Yes, I blogged about Flameshot before, but I’ve since stopped using it due to an annoying locale issue deep in my Manjaro configuration. Its replacement, Ksnip[fn:12], is not as good. While I could forgive Flameshot’s lack of features in favor of lightness and elegance, Ksnip is way “bulkier” yet lacks many of Greenshot’s good features and a good amount of polish.

While I’m in the graphics department: another great tool in Windows is paint.net. It’s a light photo-editing tool which walks the thin line of “Photoshop light” like no other app I’ve used. There’s GIMP of course, but I never use it. It comes with its own learning curve and a plethora of features I almost never need beyond my Android’s Snapseed (here are two recent examples).

I’ll skip the loaded issue of gaming on Linux, since the fact that I have a Windows desktop built mostly for that speaks for itself, and continue into other forms of entertainment: hosting my movies, music, and ebooks. Linux does have the upper hand when it comes to running a good server, but installing Plex and managing file permissions for an NTFS hard drive shared with Windows is an experience I’m willing to pass on. When I eventually graduate to a NAS, it will probably be Linux-based.

So this mostly leaves us with Windows for entertainment and video editing, Linux for productivity and work[fn:13]. Sounds about right to me.

What about you? Do you use different systems side by side like this, or are you the laptop warrior kind? Maybe a Windows guru? A mac-all-the-way person?

[fn:12] Seems like this project has been killed in favor of kImageAnnotator, which looks exactly the same to me.

[fn:13] Actually, since I use Backblaze to backup my files, my Windows machine is also my dedicated “backup server-client” as well. However it’s my Raspberry Pi that does the automation and uploads a compressed backup file every night without fail. You can’t beat Linux when it comes to automations of “set up and forget” like these.
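The nightly job is nothing fancy; a minimal sketch of the idea looks something like this (the paths here are demo stand-ins, not my real setup):

```shell
# Compress a folder into a dated archive - the heart of the nightly job.
mkdir -p /tmp/demo-org /tmp/demo-backup        # stand-ins for the real folders
echo "sample note" > /tmp/demo-org/journal.org
stamp=$(date +%F)                              # e.g. 2019-11-28
tar -czf "/tmp/demo-backup/org-$stamp.tar.gz" -C /tmp demo-org
ls /tmp/demo-backup
# A crontab line like this would then run the real script every night at 02:00:
# 0 2 * * * /home/pi/backup.sh
```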

Footnotes

Blog Updates Posts

As I wrote the October Updates post, I came to realize that having a “what’s new” kind of post regularly is a good idea. Usually, updating one thing causes me to want another change or to break something that worked. These are learning experiences that I believe could be beneficial to other bloggers out there (I know they are to me). For those interested, here are some of my thoughts on that.

I’ve discussed my general plans after passing the Security+ exam in my video. Unlike what I believe is the generic direction most folks would take at this point, like advancing their career, going further into information security, a higher salary, etc., I wanted to take a step back and focus more on my website.

The desire comes from the recognition that studying for the exam put many other things I wanted to explore on hold due to lack of time and wanting to get this exam over with. Toward the end of my studying, I even stopped caring if I pass or not. I just wanted to be done. I don’t want to have this attitude toward my interests, but this is what happens when you’re trying to push through a deadline.

I enjoy writing and explaining technology, so my website is usually a natural playground for constant development. I’ve taken large steps forward with Hugo and ox-hugo, which allows me to write my posts directly from my favorite writing environment: org-mode. Since then, I’ve added a few bits and pieces here and there, mostly to CSS and layout (I call these “cosmetics”) and some functionality with Disqus for comments and tags to categorize my posts. All of these, to one extent or another, were new or introduced a higher level of complexity than I’ve dealt with in the past.

Going back to work on my website was slow at first, but quickly became more involved when I encountered a few problems. First with simple CSS issues, later with Hugo, these issues had me looking more critically into my site, understanding Hugo and its components, and reminding myself how things worked. Hugo is a very powerful tool with a simple way of doing things, but it’s not always easy to get into its mindset. Like a skill that requires honing, if you don’t work at it often, you lose the sharpness of it. It’s not a matter of writing down how to do things either: this state of mind is different. It’s one thing to follow instructions blindly to get things to work, it’s another to understand how it works and why. To me, this is a good feeling, a certain “high” when something clicks, a good motivation to keep on learning. It is one of the reasons why I enjoy working in IT and constantly being exposed to new things.

The October Update post (and those coming up in the future) is dynamic. As I work on new features, I’ll try to explain what I’ve done and how. I hope to get more engagement and ideas from those reading.

Blog Updates: 2019-10

I’ve been fairly active with the website this month, but most of the changes are behind the scenes.

Tags to Topics

The “tags” title changed to “Popular Topics”. This list now displays any topics that have been featured at least 3 times on TAONAW, from highest to lowest. A new “All Topics” link at the bottom leads to the tag page (which I need to change to “topics” now - work in progress) to view all the other topics, sorted by latest addition. I am planning to add an introduction to the page later on, inspired a bit by Karl Voit.

The “Sections” portion of the sidebar was eliminated. More personal posts are now tagged as ”life”. As a preparation to go sections-less, the tags were cleaned and condensed to reflect broader topics. New topics that emerge will receive their own tags as they become popular.

Cosmetics

The sidebar is now 14rem wide to contain the “Popular Topics” heading without a break. This caused the words “Art Of” in the red title in the sidebar to run together, something I intend to change. I learned that it’s probably smarter to increase the space between words in the title using the word-spacing CSS property instead of changing the width of the sidebar.

Post titles have a new font, Montserrat. If you don’t see this font, please let me know. This change was implemented previously, but later deleted by mistake in the CSS file. This is a Google Font that is enabled for free using the head_fonts.html in Hugo.

Colors for the tags under the titles and the “Read More” links on the main page were changed to the same red as the title’s. Other links in the posts remain the same[fn:14].

[fn:14] This was harder than it should have been… these links are nested inside a class which is itself nested inside another class. The right syntax in CSS is [parent-class] [child-class] [html-tag], with the spacing being important: a single space between each part, no more. However, because my CSS was messy, the rules I’d added were re-defined again later in the CSS file; I wasn’t aware I styled the same elements in two different places. As it turns out, cleaning your code (CSS, Bash, or whatever) is important - and so is adding comments.

Fixing the Homepage

My homepage, which displays a list of summaries of posts, broke after a recent Hugo update. That was because of a specific Hugo change in version 0.58. This was fixed after a day of diving into Hugo, something I should probably do more often.

Trimming Down the Fat

As it turns out, there are a lot of unused files inside the builds of this website. For example, when I changed my tagging and got rid of the old tags, it turned out the old tags still had HTML files inside the site. The original theme I used to build this site is still stored intact inside the theme folders, +and I don’t really need it+[fn:15]. There are other examples; this is probably one of the known side effects of having a static website that builds HTML pages for everything.

Clearing the old files is tricky, and I ended up deleting files I didn’t know I needed until I tried to publish the site again. Not a big problem, since everything is backed up many times over, but a hassle I could live without.

[fn:15] When I tried to publish the site to Netlify (see below), it turned out I do need the original theme of the site. I am unsure why at this point, since all the essential files I use are stored outside of the theme’s folder. I believe this has to do with the site’s theme setting, which specifies the theme’s name and probably points to that folder.

Netlify - Coming Soon

As Kaushal Modi, my Hugo mentor, pointed out, the answer to all these annoyances is Netlify. I knew about this service previously, but this time, because of a specific issue[fn:16] I had with git, I learned that Netlify bypasses the build I create locally: it reads the md source files directly and builds the site on its own servers.

Because the md files are created on the fly for me thanks to ox-hugo, it’s almost as if I have a dynamic website. I would still need to push the changes to GitHub, but Netlify would take care of everything from there. No more old dead HTML files or forgetting to build my website before pushing to GitHub - something I do more often than I’d like, for sure.

Getting a website to build on Netlify is pretty easy, but I’ll need to change the DNS records to point to Netlify’s servers instead of GitHub’s. That’s not a hard thing to do, but there’s a chance something would break and my website would be down, and I’m running out of time as I’m writing this. I will wait on Netlify a bit longer, but this is a change that is pretty sure to happen.

[fn:16] I included the word “hugo” in my .gitignore file. This was because I used to store a Hugo binary in the site’s root folder, since a newer version was not available on Linux Mint at the time.

About Section

I’ve added an About section to the blog. Creating a section that uses the posts’ template was a learning experience, since Hugo’s default is to treat all sections’ main pages (index.html pages) as a “list” template. To resolve this, I needed to use layout: "single" in the front matter for the index page. In ox-hugo, this is done by adding an :EXPORT_HUGO_LAYOUT: single property under the header.

I’m still not sure how to fit the link to the About section in the sidebar. For now, it’s just linked from my name in the site’s description.

Footnotes

Too Many Notifications

The fan noise that came from my Windows machine finally got on my nerves, so I turned off my PC. I turned it on the next day, not thinking much about anything, my coffee at hand. As soon as I logged in, I was greeted with windows opening on me, notifying me of updates, of new game sales, of emails… “Click me! No! Click me!” Yeeeesh. I put my coffee down and shut my Windows PC off again. How did I get to this point again?

Windows allows all kinds of apps to be launched at login (autostart) and throw notifications at you without asking for permission. Because my Windows PC is used for gaming, the Epic store announces free new games, and Steam’s client keeps updating every few days for whatever reason. NVIDIA’s updater includes bloatware, wrapping new driver updates with news bulletins I care very little about. Windows itself informs me every time an update is waiting for me to restart, which I do as soon as possible because I don’t trust the system to wait for me as it claims to do: if it decides it’s time to update, then it’s time to update, and there’s nothing I can do about it. Even the Chrome browser launches automatically with Emby, and with it all the notifications built into Chrome.

I’m not new to this and I shouldn’t be surprised, yet, somehow, this behavior keeps sneaking up on me bit by bit until it’s suddenly too much. While Windows is notorious for this kind of behavior, things in Linuxland are also getting out of hand. In particular, Firefox and KDE-Connect.

Take WhatsApp for example, which I have to use to communicate with 95% of my friends and family[fn:20]. It’s open in Firefox, where it’s quick and convenient to type messages while my phone is charging, so Firefox informs me of new messages coming in. My phone, connected by KDE-Connect, also sends notifications. And if I ignore these because I’m busy, my Fitbit Ionic vibrates on my wrist with yet a third attempt to grab my attention.

This all seems annoying and unnecessary as I write this, but during the last two weeks, it was vital. My dad was in the hospital undergoing surgery, and my family members overseas were talking to each other in a chat room. I wanted to be involved, but not to the point where WhatsApp shows on my browser at work, since my privacy (and that of my family) is important to me. Because running around a lot is a normal part of my job, I don’t always sense the phone vibrating in my pocket. The watch was the best option for being notified. The big hoopla is mostly over now, and it’s time to tune it down.

Another good example is email. I moved personal emails (family and friends) to Protonmail about a year ago: I wanted to give Google a little bit less info to work with, and encrypting communications with important people gives me a warm fuzzy feeling. My main money/health/adult-stuff email, though, is still Gmail, while Outlook has autonomy at the office.

I currently use Thunderbird for my Gmail, and I find that the lack of notifications is potentially a problem: if one of my apps happily subscribed me again automatically, I want PayPal to inform me I was just charged. The occasional newsletter from Bandcamp is worth skimming for the good new music I’ve re-learned to appreciate. However, I get no notifications unless Thunderbird is running. This is good, because the thought of being notified of every single email makes me want to puke, but it’s an all-or-nothing kind of deal, which is not that great.

I know I need to get in there and tailor my notifications to my specific needs, but who has the time to do that and keep things under control? I know I don’t. I’m on the verge of contaminating my peaceful Linux environment with notification and information overload I don’t want.

The solution for now seems to be in Gmail. I haven’t tweaked my filters and labels there in a long time. It should be easy enough to either teach Gmail what’s important or at least create an automatic workflow and then worry only about specific labels - these can translate into folders in Thunderbird and, from there, notify me just for those folders. I can see this working.

At least in Linux information overload is my choice, not some operating system’s default behavior.

[fn:20] In communications apps there’s not much of a choice: you use what your friends and family use, and that’s it. You can sing the praises of Signal or what-have-you as much as you’d like, but you’d end up talking to yourself.

Footnotes

A New Camera Instead of the Pixel4

When Google was about to officially announce the new Pixel4, I was excited about getting a new phone with an improved camera and enhanced night vision. However, not even a week later, I ended up with a digital camera instead. A few weeks later with my camera, I am happy with my purchase and the new things it brought into my life.

One of the important lessons you learn working in IT is not to jump the gun. You always want to stay a generation behind, with full maturity and support. When it was time to trade my current Pixel2 for the new Pixel4, my urge to get a new one had already taken a hit from the cost. The more I waited, the more problems surfaced, like the Face Unlock issue, or the not-exactly-90Hz display thing. The list of “mehs” with the Pixel4 doesn’t seem to show signs of slowing down.

Holding back on the Pixel4 transformed my excitement into “wait, why do I want to give them nearly a grand?” I was pushing myself toward the Pixel4 despite everything I’d heard, and the reason for that was the camera. I’ve been enjoying my growing photograph collection and the bit of artistic escape I get in my otherwise fully-tech-related life.

So if all I want is a better camera and to take better photos… Why not just get a good dedicated camera without an $800 phone attached to it?

Within minutes I was already watching a YouTube video about the four different types of digital cameras (by the way, Dave McKeegan has a bunch of very well-made explanatory videos about photography). Hours turned to days as I swallowed everything I could from YouTube and photography websites. When I felt I had an idea of what would work best for me (a general-purpose mirrorless camera), I posted on Reddit and settled on the Sony a6000.

images/20191128.jpg

My decision was based on a couple of factors:

  1. I wanted a cheap camera, yet one still capable of taking good photos and that would allow me to continue learning. Likewise, I wanted a popular camera with a lot of support and tutorials available.
  2. I wanted a light, compact camera that would be easy to carry around and fit in my backpack so I can easily take it anywhere. I didn’t want to worry about carrying a heavy bag, which would demotivate me from bringing my camera and taking photos.
  3. I was interested in capturing videos as well, mostly for my vid journals and possibly videos for broadcasting.

The a6000 answered all of the above. Five years on the market, it is still widely sold, but old enough to be cheaper than its newer replacements (the a6300 and the a6500). It’s small and light, easy to carry, and most of all, it teaches me a lot about photography.

Because I was about to leave on a vacation to the Bay Area in California, I had a limited number of days ahead of me. To make things more complicated, a friend from work suggested bringing over his Canon 7D. He was considering selling it to me once my sudden enthusiasm for photography surfaced, and suggested I take it for a spin, so I did.

The 10-year-old Canon camera[fn:21] came with a full-size backpack full of professional-grade lenses, which I got to play with over the weekend. I didn’t know what I was doing exactly, and I had limited time to try all the goodies, so after a weekend with it I decided to give it back to him and stick to the basics with the a6000. I’m happy I did. The Canon 7D’s plethora of professional lenses was way over my budget and my budding skills. The next weekend I was already in B&H, and a couple of hours later, I got my a6000 with a starter kit and an additional 55-210mm lens.

Three weeks later, I’m happy to report I’ve managed to carry the camera with me to work every day. I take photo-walk lunches whenever possible and snapshots of different things I find walking: dogs in dog parks, birdhouses, trees and color-changing leaves, co-workers, myself talking excitedly about photography (an interesting side effect: I got myself familiar with OpenShot again as an interesting side quest). I used to take walks on weekends, but now I do so with the camera, and it makes me examine each object from different angles and in situations I didn’t before. It’s a lot of fun, and I’m excited to do something more artistic with my time.

[fn:21] One difference between cameras and smartphones that I learned early on: there’s no huge difference between a two-year-old, a five-year-old, and even a ten-year-old camera. I’m sure for professional photographers there are some important factors to consider, but at the same time I have a feeling that folks who have been taking photos for years know how to get great photos from old and new cameras alike. With a phone, you’re expected to buy a new one every 2 or 3 years; with cameras, it’s more about investing in what you already have.

Footnotes

Happy Thanksgiving

My family and I (those of us in the US, that is) have a micro version of Thanksgiving. There’s no turkey (we’re not exactly big on meat), there are no large family gatherings that break into their homogeneous splinters as time progresses. I usually bring my one partner with me over to my sister’s, where we thaw our awkwardness next to the fireplace and watch my little niece run around like a leg-seeking missile, homing in on the nearest standing person.

This time around I’m going to bring my new camera, and for the first time I’ll be snapping pictures of people. Up until now, I’ve aimed my lenses at plants, squirrels, and birds. It’s been a welcome distraction from work, which has been particularly irritating for the last month.

I find that it gets harder to sit down and write when I’m away from my writing for long periods, yet, as soon as I sit down in front of the soft beige window of my Emacs, the writing comes to me. There’s also a difference between writing in my journal vs writing for the blog. The first is more raw, quick, with less caution. The second is slower, descriptive, with a dose of metaphors.

Writing like I’m writing now, without an outline or a specific goal, is risky. Many times I end up without a message to deliver, much like a passenger boarding the train for the ride, not for the destination. Sometimes, this is OK. There’s something nice to these posts, they tend to be more personal and inviting. There’s more life between the lines of text, and the paragraphs seem to connect better.

I do have more things I want to write about, but much like before a run in cold weather, I need to stretch first. I want to check the workflow, from my typing fingers through org-mode through Emacs through GitHub to the webpage you’re hopefully reading now without an issue. There are parts of me participating in this that need to be re-tuned before I go on to the next big topic, and this is a warm-up.

This then seems like a good place to stop and wish you all a happy Thanksgiving. I hope you enjoy the little moments with your family and friends and steer clear of Black Friday deals for more stuff you don’t need. Let me know how yours went if you feel like it.

The Long Goodbye to Windows

The idea of switching “completely” to Linux has been in my head since I wrote about having too many notifications thrown at me whenever I log into Windows…

Wait. Isn’t this entire blog written from Linux? Haven’t I already switched to work from Linux? Almost 100% of the posts here concern Emacs, org-mode and Linux. “What do you mean”, you’d ask me, “switch completely? Haven’t you already?” Yes. Kind of. Not really.

Let me back off a second, and tell you about Hedwig.

Hedwig, for those of you who don’t know, is Harry Potter’s owl. In the movie series, it is a large snowy owl that delivers Harry his mail. It is an obedient, yet confused owl. It’s a bit of a mess, but you learn to like its character and weirdness.

images/snow_owl.jpg above: a snowy owl, straight from the books

My electronic devices (phone, laptop, and desktop) are usually named after the owls from Harry Potter. The Raspberry Pi which syncs all my org-mode files is called Hogwarts; the desktop is Hedwig, the phone is Pigwidgeon (Pig), the laptop is Hermes. I am not sure when I came up with this naming scheme, but my liking of owls and Harry Potter clicked some time ago.

Just like the “real” Hedwig, my desktop is a large (full tower, soon to be full ATX) machine. It is somewhat “confused” about certain things, like the time (I replaced a CMOS battery a couple of times), the state of Windows updates (aren’t we all), and in general, its identity. Hedwig’s been through Windows, Linux (various distros), and even a full Hackintosh at one point. It’s the first computer I built completely from ordered parts: the case, motherboard, PSU, CPU, GPU (all the Us), optical and hard drives. This happened about 6 years ago, before I switched careers to IT.

Since the beginning of December, Hedwig is again a dual-boot machine: Windows and openSUSE Linux. I decided to switch to openSUSE from Manjaro (which I use on my laptop) because I’ve been using openSUSE at work on my VM and it has been rock-solid; I’ve also grown to like YaST, which helps when I don’t remember how to change a setting from the command line. The idea is to eventually throw Windows out completely and interact with it only in a VM on a full Linux machine. This is something I tried to tackle in the past but failed to do, in the days before Emacs and Steam on Linux. Now, with my few favorite games available through Steam[fn:22] and Emacs leaving Windows and Office tools in the dust when it comes to file manipulation and writing, I think I will stay committed.

But I am writing these lines on my laptop. Hedwig’s fresh Hugo installation, which I use for this blog, is not working yet. This is just one example of many: Redshift, Linux’s trusted version of f.lux (or Night Shift, if you speak Apple); VeraCrypt, a cross-platform encryption solution I use daily; and the various codecs required to watch and compress videos[fn:23] with ffmpeg all require special configuration.

I’ve grown accustomed to these kinds of workarounds on Linux (Manjaro had its own). As much as I love the productivity and stability it brings with it, configuring a machine to work in Linux the way I want it to is never a quick process. In contrast, when I had to restore Hedwig to Windows in the past, all it took was signing into my Microsoft account; within a minute, all my configurations, including the wallpaper, were in place. But that’s the price you pay for letting Microsoft (or Google, or Apple) have a probe up your behind, knowing everything you do. This eerie feeling of “we know what’s good for you” is built into every OS, and extends beyond the technology world alone.

Everything fits into nice neat boxes, and so do you, so shut up and move along, I feel they tell me. Even though I find that I have less and less patience for folks who invent “new ways” and think “originally” these days, I also have less patience for those who tell me to stick to the common ways of doing things. If we are all special, none of us is unique, right?

For now, I find that overall there’s little difference in my workflow between Manjaro and openSUSE. Most of my per-machine tweaks happen in KDE, my preferred desktop environment, and in Emacs’s file paths, which I still can’t get to be the same when encrypting (would it kill someone to have the same mounting point for veracrypt in each distro?). Overall, I hope to stick to one distro so I can simply install from a pre-configured backup whenever I need to move to a new computer or a new VM. Using one init file for Emacs (or two, one for work and one for home) would also be great instead of having five slightly-different versions floating around.

But. While this sounds ideal, I don’t think it’ll ever happen. That’s because I find that I like the little differences in how each machine looks and feels. A small XPS running Tumbleweed, openSUSE’s rolling release, fits a white-on-black theme, a neon-colored Latte panel, and a top panel for system notifications to imitate a Mac like all the cool kids; Hedwig is far more “responsible” with SUSE’s Leap and a traditional black-on-white theme. I need at least a bit of personality on each machine. That’s the creative, non-conforming dude inside the IT guy in me. Or perhaps I’m just nuts.

[fn:22] In case you’re interested, these four games are currently Cities: Skylines, Company of Heroes 2, Oxygen Not Included, and Darkest Dungeon. Over time (and age, I suppose) I discovered I’m less interested in FPSs and more interested in detail-oriented games. I wholeheartedly recommend you take a good long look at Oxygen Not Included, especially if you’re reading this blog regularly - it’s perfect for problem-solvers who are looking to sink into a fun, cute, but very thought-provoking kind of game.

[fn:23] As I found out, openSUSE doesn’t come with libx264, a codec used to compress videos into the mp4 format. It is so commonplace in other distros and programs that many (me included) don’t even realize it exists, despite the fantastic job it does.


General Update

images/2019-12-29_09-24.png

I have a habit chain in org-mode for this blog which shows me I’m past due on updating. Four days, to be exact. I’m rushing to update with this little post as I’m sitting here gulping coffee in my boxers before a shower I need to take… so bear with me.

I’ve been working on a post about photography for over a week. It proved to be challenging to put together because every time I put it back on my screen, there are updates and more things I want to talk about. Over the past month, there were a few good points I discussed with different people which I wanted to include, and I just don’t seem to have the time, which is the excuse I hate the most. Usually, it just means you need a slightly different angle to think of things.

So I’ve been reconsidering and I’d like to work on a video again, recording some of my thoughts. After all, part of getting the camera was recording videos and getting better at these – not to mention that I can use visual examples. So I hope to get to that even tonight.

In the meantime, I just wanted to update and tell you all I’m still here, still alive and kicking, just behind. Thanks for sticking around!

Lessons from a Huge Project at Work

There’s probably an unwritten rule (and if not, there should be) that the number of documents needed for a project is roughly equivalent to half the number of people working on it. Our department alone is about 20 folks, and for a recent big project we needed to work with other divisions, so at this point I can’t even tell you how many documents we need to keep track of.

Usually, I use org-mode for everything. But this time I was foolish enough not to…

In the beginning, there was one email and one spreadsheet. The spreadsheet was a list of inventory items that needed to be tested; the email contained a generic workflow for the testing that needed to be done.

Problems started when I tested the workflow and found it to be flawed for our work environment. That meant I had to communicate it to the department in charge of that workflow, going through the appropriate channels. Communication, as usually happens when there are many people involved, failed. I should have picked up on the inevitable signals right then, as I have many times in the past, but I didn’t.

Instead of creating notes for myself in org-mode about what I encountered, why there was a problem, what my workaround was, who the people I needed to contact were and when to expect an answer, I just brought the issue up in a meeting and otherwise kept things to myself.

We have many collaboration tools at our disposal at work. A note in OneNote or a reminder in Outlook Calendar could have helped, perhaps a follow-up in one of our generic project spreadsheets which describe roles and issues. The problem with all of these is that they are collaborative tools which we have been overusing already.

I’m not saying there’s anything wrong with teamwork or collaboration. I work in a unit of people, and we all count on each other. Integrating a new large product into our environment which we know close to nothing about has to include other people who understand it. Communication needs to happen so we can discuss the problems and find solutions.

The choice of what to collaborate on, however, what tools to use, how and when to communicate - that’s where the problem lies. That’s because one person is used to emails, while another prefers phone calls; I like org-mode while one of my teammates still prefers to write by hand; management likes meetings and generic timelines while we prefer workflows and specific solutions. You can’t have one source of communication used by 20 people and have all 20 be 100% satisfied with it. You are the only person who should organize your stuff, and it is up to you to bring your points to the agreed-upon public medium, whatever it is.

That’s why there should always be a draft before there’s a report, an agenda before a meeting, an outline before an essay. No one else should have access to your notes, because if they do, the notes immediately become “contaminated” by their thought process.

I should have used org-mode and I didn’t. So, here is a reminder for myself, and perhaps for you, too. My workflow for dealing with big projects, in steps:

  1. Create a task: this is the start of any actionable item, often before I know it’s going to turn into a project.
  2. Look at the task and realize it needs to be broken further down: The breakdown can happen as I work on it or before I start; either way I record what needs to be done or was done in sub-tasks.
  3. The main task changes its keyword from “TODO” to “ACTIVE”; to me, this means it’s a project. It no longer has a scheduled time, but timestamps which I stretch from day to day or week to week as I work on it.
  4. If the project is very large or involves a lot of people and meetings, I usually create another sub-tree without a keyword called “log” and keep it on top. This log contains non-agenda dates of when each entry was entered.
  5. If a project contains a log and several sub-tasks, I will create a separate org file just for it. This will keep it in context and possibly can be used later to export should I need it to.

Usually, my projects end at step 3. I later go in and change from “ACTIVE” to “DONE” when the project is finished. Steps 4 and 5 are reserved for larger projects which are usually measured in months, not weeks.
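To make these steps concrete, here’s a minimal sketch of what a project looks like in my org file by step 3 (the headlines and dates are made up for the example):

```org
* ACTIVE Create a consistent guest user-template on office iMac
  <2020-01-06 Mon>--<2020-01-17 Fri>
** DONE Auto-launch Safari: ticket creation page
   SCHEDULED: <2020-01-07 Tue>
** TODO Log user out within 5 minutes of inactivity
   SCHEDULED: <2020-01-09 Thu>
```

The parent carries the stretched timestamp range; only the sub-tasks get scheduled times of their own.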

Switching to Kubuntu over openSuSE

After giving openSuSE a try, I decided to switch to Kubuntu. It’s been a month or so since the switch, and everything I want works, to the point that I haven’t booted into Windows once.

I ran into several issues with openSuSE and finally decided to call it quits when Company of Heroes 2, one of the few games I play, refused to work. When I contacted Linux support for it, the answer I got was along the lines of “sorry, we don’t support this version of Linux.” My new desktop Linux setup is not geared toward hardcore gaming as it was first intended when I built it, but I still want to spend some time gaming here and there. There were other issues too, of course, mostly the need to download specifically tailored software for something I needed.

With Kubuntu installed, I looked into installing Windows in a VM. I ended up choosing Oracle’s VirtualBox for its better integration with the (latest version of the) Linux kernel, and indeed, it runs pretty well. Windows is now in a confined frame used for occasional downloads and remoting into computers at work - this completely satisfies any need I have for Windows at this point. After a bit of a hardware upgrade, which hopefully will happen soon, I won’t even need to turn off the VM when I’m done.

The VM caused several “hijacking” issues with my keyboard and mouse, so I disconnected my KVM switch and stored it away. Having a powerful desktop means my laptop is no longer in use unless I travel or need a backup. It still runs Manjaro and I think I will keep it this way.

I feel like my Linux setup is mostly complete at this point. My desktop (which I call Hedwig) is used for writing or productivity, photography and gaming.

How I Tackle Projects With org-mode

Over the time I’ve been using org-mode, a natural process of project management has emerged. For the first time in my adult life, I haven’t felt the need to switch productivity systems. I even enjoy looking at my projects nowadays: a list of organized notes, tasks, and information. I’ve discussed projects here before, but not in this detail. I hope this will be beneficial to others, especially those new to org-mode.

This system is rather simple and straightforward, without many bells and whistles. To understand it though, I need to first discuss my methodology in org-mode.

Most tasks are created in one of two ways: through a capture template on my desktop (either at home or at work), or through a tap on the “+” on my Orgzly widget on my phone. I’ve discussed both of these a couple of times on my blog under the org-mode tag.

A task is usually nothing more than a quick reminder (“pick up clothes from the dry cleaner”). Sometimes, especially at work, a task can become more complex: (“Create a consistent guest user-template on office iMac”) which then develops sub-tasks (“Auto-launch Safari: ticket creation page”) and (“log user out within 5 minutes of inactivity”) and (“Keep same wallpaper for all guest log-ins”).

When this happens, my method is to switch the first task’s Keyword (which is now the parent task) from TODO to ACTIVE and remove the “Scheduled” property. Then, I create a sub-task under it (“Auto-launch Safari: ticket creation page”) and that sub-tasks gets its own “Scheduled” property.

Here’s how it looks laid out:

images/2020-01-23_06-22.png

You’ll notice I use agenda-visible timestamps to express the duration of the project as I work on it. This also means that the project will show at the top of my agenda[fn:24], so I see my in-progress big projects first. When all sub-tasks are done for the project, I mark the parent task as DONE and move on.

Large projects, ones that can end up taking weeks and months, are too big to fit in my weekly org file. Thankfully I don’t have too many of those[fn:25]. When such a large project takes place, I do two additional things. First, I create an independent org file for the project. Second, I create a “Log” sub-tree at the very top of the project, before all the other TODOs, which is used as a journal for that project. The log sub-tree is broken down by non-agenda timestamps and brief titles, explaining my thoughts and methods as I work through the project. I also take a second look at the title of the project, which is often just the very first task that started it, and change it to something starting with “OP: ”. I find that calling a large project something like “OP: iMac walk-ins terminator” instead of “Create a consistent guest user-template on office iMac” is more memorable and just plain fun.
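For the curious, a large project file with a log sub-tree looks roughly like this (the contents are invented for illustration; the [bracketed] timestamps are inactive, so they stay off the agenda):

```org
* OP: iMac walk-ins terminator
** Log
*** [2020-01-06 Mon] Scoped the guest template with the team
*** [2020-01-10 Fri] Auto-logout works; wallpaper still resets
** TODO Keep same wallpaper for all guest log-ins
   SCHEDULED: <2020-01-13 Mon>
```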

While I have an active journal file, I find that more and more I gravitate toward breaking off journal “chunks” into respective projects I’m working on. This is faster (my thoughts about the project are right there, not in a separate journal that needs to be linked correctly) and easier to retrieve later (instead of thinking “what was it that I did…” staring at a large journal thinking what to search for, I have a project with related thoughts).

I don’t have a clear-cut requirement for switching from a multi-level task to a full project, nor a point where I switch from writing in my generic journal to starting a log in a specific project, but I’d say it’s a mix of how much time I’ve worked on something (has it been a few weeks? Do I keep coming back to it?) and how complex it is.

[fn:24] This is done with (setq org-sort-agenda-notime-is-late nil).

[fn:25] This is more a state of mind than a technical limitation. I don’t like having large projects weighing down on my agenda because these often get stuck as I wait on other people and little progress is made. If I’m absolutely sure I need to have such a project up (for example, if we’re working on a large-scale migration at work), I try to adjust the timestamps on my agenda so I don’t see the project all the time and rather have a sub-task as a reminder to check in on something again. I find that it’s often better mentally to have a clear agenda, knowing that I have a task set to check-in on something within 24 or 48 hours than to constantly see all my TODOs stuck in red at the top.

[fn:26] As a barista, I appreciated getting tips, and I’ve considered adding a tips page to the blog before. Unlike donations, tips are small, meant to be tokens of appreciation for good content and not a way to make serious income out of the website. Unlike subscriptions (like Patreon), they are one-time deals. This will encourage me to work on good content without asking for support too bluntly, which I don’t like.


The joy of Taking Photos

I’ve been enjoying taking photos since October and decided to start recording videos, expanding the way I publish content. As it turns out, recording, editing, and storing videos is not as simple as I thought. The outcome is that I’ve been holding back on discussing one of my biggest passions these days. I want to change that today.

I’ve already discussed my camera before my trip earlier this year. Since then, I’ve learned more about photography and enjoyed seeing my skills improve. Here’s one of the first photos I took to demonstrate this:

images/20200201_1.jpg

The above image is an experiment in trying to focus on an object with a 16mm lens. I was trying to see if I could get the blurry effect with the stock lens, and I succeeded, though the fruit practically had to touch the lens to get it. The composition of this image was done a few weeks later (it’s all about cropping).

One of the elements of photography as an art form is seeing things that are not really there. It requires a creativity I admire in other people’s photos. My own take:

images/20200201_2.jpg

I almost missed the figure of the hand in this photo while focusing on the water drops after the rain. It captured the mood of the day perfectly: gloom and melancholy. It was taken much more recently.

I often go to a park nearby to relax and enjoy a bit of nature in the midst of the busy city. Animals are a constant sight, and as I improved the flow in my movements and became somewhat less clumsy, I stopped scaring the little ones off. First, there were squirrels:

images/20200201_3.jpg

Eventually, as I became more still, birds stopped flying away as I took my time changing my lenses:

images/20200201_4.jpg

In both the above cases, I took plenty of photos and selected these two. I find that squirrels are fairly easy to snap. They’re not really afraid of humans and can actually be quite intimidating. Birds, on the other hand, are a different story. They are everywhere, and you have to be perfectly still for a while for them to come closer.

There’s a lot in photography that is both different from and similar to the work I do every day. In a way, it’s highly technical, with much to learn about lighting and composition. In another way, it takes me out to explore, often on long walks of several hours, alone with nature and my thoughts. I have to take my time and slow down on these walks to take good photos, and I appreciate many things from different angles as I re-evaluate them in a different light, quite literally. The technical and the art mix in a way that fits well into my current lifestyle.

The images you see here are all from my Instagram, which is why they are such bad quality. I do have the larger, high-quality versions with me, of course, and recently started printing some of them to keep around my home.

Some of you may scoff at Instagram. Why use Facebook’s commercial platform on a site that stands for free software? The answer is reach and discovery. Other options exist, but they are nowhere close in user base, and they limit how many images can be stored. I do hope to find a better solution as I take more photos and grow in skill and content.

Let me know what you think, especially if you’re a regular reader of this site!

Best Tools for the job: Duplicati

One of the things I’ve kept postponing since switching to Linux on my desktop has been my backup system. I had a script on my Raspberry Pi to create encrypted tar archives of my org files twice a day and transfer them over to Hedwig (my desktop), which would then find their way to an online backup provider. When I switched to Linux, that system stopped working, and I’ve kept postponing the critical need for an online backup for too long.
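The script itself was nothing fancy; a minimal sketch of the idea looks like this (the paths, passphrase file, and hostname are placeholders, not my actual setup):

```shell
# Archive the org directory, encrypt it, and hand it off to the desktop.
STAMP=$(date +%Y%m%d-%H%M)
tar czf "/tmp/org-$STAMP.tar.gz" -C "$HOME" org
gpg --batch --yes --symmetric \
    --passphrase-file "$HOME/.backup-pass" "/tmp/org-$STAMP.tar.gz"
scp "/tmp/org-$STAMP.tar.gz.gpg" hedwig:backups/
rm "/tmp/org-$STAMP.tar.gz" "/tmp/org-$STAMP.tar.gz.gpg"
```

Run it twice a day from cron and you have the gist of it.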

I’ve been a happy Backblaze[fn:27] customer for about half a year, and I used their full computer backup plan on Windows for a flat fee of $60 a year. With Linux though, this plan was no longer an option. It seemed Backblaze was telling me “hey man, now that you’re using Linux full time, it’s time to wear the big boy’s hat and stop using our training wheels. Get serious, or go back to Windows.”

Backblaze’s B2 storage is one of the most affordable (if not the most affordable) deals I’ve found. The plan was to write a script around Duplicity, a trusted Linux tool that’s been around for a while. After a massive project at work finally ended, I had the time to sit down and try to make it work. I started running into issues though: it seemed like authentication with B2 was rejected due to some known issue, but I couldn’t find much information about fixing it. What I did find, though, as I was looking through various help documents, is a lesser-known FOSS tool: Duplicati.

You can tell by the play on words here that Duplicity and Duplicati are related: “The Duplicati project was inspired by Duplicity and had similar functionality until 2008,” its team says in the manual, and the philosophy is the same: a free, encryption-supporting tool for backups. It’s not directly available in official repositories (maybe in rolling releases?) due to its beta status, but it’s easy enough to download for Debian-based distros (this is another example of why switching to Kubuntu was a good idea for me).

Duplicati’s GUI is easy enough to understand, almost to the point that the manual or the excellent getting-started guide are not needed. As it turns out, connection settings for Backblaze’s B2 come built in, as well as many other options like Amazon’s S3 or box.com. As a matter of fact, the GUI was so simple and intuitive that I quickly created different backup jobs that required different intervals:

images/20200212_1.png

With these created, I was looking at my backups differently. No longer one large archive of all my system files, photos, media and org files, but separate, purposeful, specific jobs for different kinds of needs.

For example, my photos do not need to be backed up every day. That’s because I take new photos perhaps once a week (sometimes less, unfortunately). My org files are the opposite case: I work with them daily both at work and at home, and the differences are large enough for two daily backups: one midday while I’m at work, making progress on my various projects, and one late at night, after I’ve recorded my day in the journal and additional events on the agenda.

Duplicati also comes with command-line options, which can be used to automate a similar process on different machines (I can upload my photos from my desktop, and then duplicate the process for a similar backup on my Raspberry Pi, for example) or saved in case you need them in the future.
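From the command line, a B2 job looks something like this sketch (the bucket name, key ID, application key, and passphrase are placeholders, and the backend URL syntax is from memory — check duplicati-cli’s help for the exact format on your version):

```shell
# Back up the org directory to a Backblaze B2 bucket, encrypted.
duplicati-cli backup \
  "b2://my-bucket/org-backups?auth-username=KEY_ID&auth-password=APP_KEY" \
  "$HOME/org" \
  --passphrase="CHANGE-ME"
```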

It’s too early to tell, but Duplicati’s approach to backups, with its ability to work with B2 out of the box, is making the long-trusted Back In Time obsolete for me: there’s no need to back up to my external drive with one program and upload to B2 with another; Duplicati is perfectly fine backing up what I need to my external as well.

Footnotes

[fn:27] Backblaze is an excellent service/tool in itself. For “regular” home users, they have a simple desktop tool (Mac or Windows) that backs up everything important on the computer (main hard drive and system files) by default, but easy enough to work with if you want to be more granular about your approach, including password-protected encryption. They are aggressively affordable, even more so than Amazon’s Glacier.

About Ease of Use vs Independence

It seems like hating on Google (or Apple, or Amazon, or whatever tech giant) is the hip thing to do these days. I want to raise a couple of points to argue otherwise, and explain how I try to find balance in a monopoly-driven world.

I’ve been running Linux full time at home since last December and I’m not looking back. I love the level of control I have, the lack of nagging updates with the potential of breaking everything, the transparency in the application I use through the command line, and the overall productivity I get out of it. With the exception of a few very specific applications, I miss nothing from the Windows days[fn:30].

But that doesn’t mean I can manage my life without Google. Not yet, not anytime soon, and I think that even if I had the choice, I won’t really want to. The same is true for Amazon: it’s a giant online retailer that chews away on mom-and-pop shops, draining local business dry while more college folks move in and older ethnic groups move out. Sounds sinister, but I’m not in a rush to stop using Amazon. As a matter of fact, since I moved to this neighborhood, I started using it even more and I don’t think I’m going to stop any time soon.

You could say I’m part of the problem, and I won’t disagree with you. I’d rather use Google Maps than rely on hand-written instructions. I’d rather order something online and have it in my hands two days later than spend half an hour in a messy pharmacy, look for a product they don’t carry, and buy a replacement for two bucks more just because I live in New York City.

There’s no lack of smartphones out there that are not made by Google, but there’s only one phone that’s good enough and cheap[fn:28] enough, has consistent security patches, and is not one of those 6.5-inch monsters that I can’t use comfortably. There are a few Linux phones available in different stages of development, mostly expensive prototypes that cannot compete with the apps on Google’s Play store. I’d be one of the first to jump ship when it makes sense.

The same argument goes for other platforms I use, such as Instagram. There are other free, independent options, but they don’t come close to the exposure Instagram provides. Medium is another example: the exposure goes far beyond what Duck Duck Go brings up when tech folks look up something related to this website.

So what can be done? What’s the fine line between Usability and Idealism? There are many people asking this question these days. I don’t really have a good answer, besides that I take things personally: the more of myself I invest in a certain thing, the more immersive it should be and the higher quality (to my standards) it needs to be. This is not a die-hard principle, but a guideline that seems to work.

Take this website, for example. It’s a personal blog. Having it on a platform like Hugo and being able to mess with its internal parts is important. Having it integrate directly with org-mode, which I use every day for my most personal things, makes sense. Another simple example is my coffee. My morning coffee ritual is highly personal, involving my espresso machine and milk froth. This means I enjoy taking the time to identify a local roaster who date-stamps each pound of espresso beans I get. I know where his shop is because I was there. I even know his dog’s name.

images/20200218_1.jpg

This is the kind of thing I want to imagine when I drink my coffee at home in the morning. It’s not the same when I drink the coffee at work, or at any huge chain store. Nothing can recreate what this coffee, and the picture I took above, mean to me.

There are areas that require improvement. Again, take my photos. They are very personal, yet I give them away to Facebook and Amazon by hosting them on Instagram because of the exposure I need. I give up the quality and the personal connection for a service (in this case, reaching other people). When there’s a conflict like this, I promote my own platform over the public one. In this example, only photos that are a month old will show up on Instagram; the resolution there is already horrible, so if someone wants to see the real thing, they’d have to come to my website, and so on. Medium and this website have a similar relationship: only old posts that are worth larger exposure the second time around get pushed to Medium, definitely not all of them. I feel this is a compromise I can live with.

Footnotes

[fn:30] The same can be said about my MacOS days. I’ve used Macs for personal usage and I keep using them at work. I have a Windows virtual box for when I need to work from home or need to use a Windows program.

[fn:28] That’s the Google Pixel a line, not the Google Pixel flagship line, which seems to be expensive just because iPhones are expensive.

Blog Update 2020-02

I ironed out a couple of CSS issues that popped up since I updated Hugo, and implemented a few navigation changes and a new page. Why and how… coming up.

Posts Fonts, Read More

Changed to Merriweather. This font looks better when smaller, which is what I was after. The overall font size was decreased and the space between lines increased. This was done to improve how the site looks and to resolve the footnote notation issue, where the footnote indicator pushes its line further up than the rest. The “Read More” link was changed to a clear, better-looking down arrow, supported by Font Awesome.

Footnotes Changes

Footnote indicators are now smaller and colored dark red (same as the title) to increase visibility despite the smaller size. The footnote back-link arrows were decreased in size and colored red as well.

Sidebar Changes

The leading text (“a blog by Josh Rollins: Tech, Life, and the Stuff in Between”) has been removed: the text just restated the obvious function of the site. A new page was added (TAONAW), and links to it and to the about-me page were added now that there’s more room. The Popular Topics count increased to 5 (only topics that repeat more than 5 times show now; the rest are under “All Topics”). The font for these sections was changed to Righteous, same as the title font.

PowerShell tutorial

This is a short script that I wrote after a day of research at work. I am not a PowerShell expert, not even a rookie, but the need for automating repeated tasks combined with a good dose of curiosity and experience with some Bash automation sure goes a long way.

The script changes the network adapter’s (Wi-Fi, in this case) DNS settings to specific IP addresses and suffixes to work in a certain domain (or domains). I will first dump the script here (with comments) and then go in depth and explain the whys and hows.

First, as is becoming a tradition, the script itself in all its glory. Of course, internal IP addresses as well as my work domain have been removed:

<#
Change the Wi-Fi adapter's DNS suffixes and DNS servers via the registry.
Block comments like this one can span multiple lines.
#>

# Get the instanceID from our Wi-Fi adapter, the same way it shows in the registry:

$wifi_adapter = (Get-NetAdapter -name "Wi-Fi").instanceID

# Add the DNS suffixes for our DNS settings in registry

Set-Itemproperty -path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' -Name 'SearchList' -value 'small.domain.com,biggerdomain.com,even.another.domain.com'

# Add the DNS IP addresses to the right interface (Wi-Fi) which we got before. Note to self, need to use " and not ' when using variable in a path

Set-Itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\$wifi_adapter" -Name 'NameServer' -value '10.111.111.111,10.121.121.121,10.131.131.131,10.141.141.141,1.1.1.1,8.8.8.8'

1

OK, so what’s going on here?

The script starts with a comment. In PowerShell, anything between <# and #> is a comment. This is good for explaining what the script does, when it was created, etc.

Then we dive in and start by defining a variable. We’re going to use $wifi_adapter as a dead-simple indication of what this variable is going to contain. The magic starts with a few PowerShell commands, which are called cmdlets[fn:17].

Now, note the syntax:

  1. The cmdlet itself is Get-NetAdapter.
  2. We “filter” the results we get from it by using a parameter. That’s the -name part. What are we looking for specifically? “Wi-Fi”.
  3. Both of these are in parentheses. That’s because the next bit, the instanceID property, takes effect on the result of the whole combination, Get-NetAdapter and -name “Wi-Fi” together[fn:18].
  4. Next, we take our expression in the parentheses and “filter” it for a single property, instanceID[fn:19]. Why instanceID? We will get into that soon.
  5. Finally, we take the output of this command and store it in our $wifi_adapter variable.

Whew. That was a lot. You’re probably still confused especially because you don’t understand yet why we’re doing what we’re doing here. This part’s coming up now.
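If the parentheses trip you up, the same result can be had with a pipeline, which reads left to right (this is an equivalent form, not what my script uses):

```powershell
# Same as (Get-NetAdapter -name "Wi-Fi").instanceID
$wifi_adapter = Get-NetAdapter -Name "Wi-Fi" |
    Select-Object -ExpandProperty InstanceID
```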

2

In order to change the way a computer works automatically (via script), we can’t use the GUI. Things have to be done in the background, when we’re not there. In Windows, the way to make changes is through the Windows Registry. In brief, the registry is Windows’ book of instructions that makes it what it is. It’s its DNA, in a way. That’s why you need to be careful messing with it: you might end up with a mutation that will result in a blue screen of death the next time you reboot.

I knew going in that I was looking for a way to isolate the Wi-Fi adapter. That’s because the DNS changes implemented in this script are usually accessible to us via the GUI, in the network adapter settings. After a bit of Googling around and looking into the registry, I learned that the registry identifies network adapters by their unique instance string.

In order to manipulate the right adapter with the script, I first had to find it in the registry. In our script, that’s done by asking the computer (using PowerShell): “hey, what’s the unique ID for the adapter that is named Wi-Fi?” Or, more human-like: “What’s the Wi-Fi adapter’s unique ID?” Change “Wi-Fi” to “Ethernet” and you’d get your Ethernet adapter’s unique ID[fn:31].

3

Now that we understand why we’re looking for the adapter’s instanceID and how to find it, the rest of the script is pretty straightforward.

We use the Set-ItemProperty cmdlet to change an item’s property. In PowerShell, cmdlets are usually structured in a verb-noun way that makes sense. So earlier we wanted to get the instanceID as it is in the registry, and now we want to set a property. Which property are we setting? We already know we want to change something in the registry, and that’s where the -path parameter comes in.

It might be hard to grasp, but the registry is just another set of files on your computer, located in Windows\System32\config. When we want to interact with it we usually fire up regedit to view everything, but regedit is just a viewing tool, another GUI that most of us associate so strongly with the registry that we think the tool itself is, in fact, the registry. Go ahead and navigate to the path described here using cd and you’ll see what I mean.
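In fact, PowerShell exposes the registry as a drive of its own (HKLM: for HKEY_LOCAL_MACHINE), so you can wander around it exactly like a folder. A quick, read-only example:

```powershell
# Change into the registry "drive" and list its keys, just like folders
cd HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip
Get-ChildItem
```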

So, we ask PowerShell to modify an item called “SearchList” which is inside HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters. You can view it yourself in good ol’ regedit. What’s SearchList? That’s the suffix list for your domains, which we modify with -value. Suffixes are the last part of a domain, usually the .com or .net we’re used to, but if you’re in a big organization it can be something like myorganization.com as well, since your company has subdomains.

I have my own suffixes in my original script; this one has sample ones. You’ll notice that the domains are separated by commas, the same way they are separated in the registry. We don’t need our Wi-Fi instanceID here yet, because this part is not related to a specific adapter; the search list applies to all of them.

The last part of the script is similar. We use PowerShell to modify a value in the registry, but this time it’s the specific adapter’s DNS IP addresses. We got the adapter’s ID earlier. We start the same way here, with Set-ItemProperty, and the path is more specific as we get deeper into the registry. Look closely and you’ll see our variable from before, wifi_adapter, right there in the path with the $ sign in front of it - which is how we tell PowerShell to expand the variable inside the text. Look in your registry with regedit and it will make even more sense.

Similar to the suffixes from before, the values are IP addresses separated by commas. You will most likely have internal IP addresses; I just used a few bogus ones - besides the last two. 1.1.1.1 and 8.8.8.8 are Cloudflare’s and Google’s DNS servers, respectively.
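The two registry writes described above can be sketched like this (the suffixes and IP addresses are placeholders, as in the sample script, and the second path assumes $wifi_adapter holds the instanceID we stored earlier):

```powershell
# Suffix search list: applies to all adapters
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" `
    -Name "SearchList" -Value "example.com,sub.example.com"

# DNS servers: applies only to the adapter whose ID we stored in $wifi_adapter
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\$wifi_adapter" `
    -Name "NameServer" -Value "10.10.10.10,1.1.1.1,8.8.8.8"
```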

[fn:31] Those of you with experience in PowerShell (which is not me, by the way) will probably have a problem with the script relying only on “Wi-Fi” in the name as a way to identify the right adapter. After all, a computer can have several Wi-Fi adapters. For my specific needs in my work environment, where we get a laptop fresh out of the box and run this script first, this works. However, I should probably narrow it down further by being more specific and identifying the Wi-Fi adapter that is running (status is up), or by other means, like the manufacturer, etc. I realized this as I was writing this post.

[fn:19] Get-NetAdapter has many properties. You can view a list of those by running Get-NetAdapter | Get-Member -MemberType Property. Notice that here we run Get-NetAdapter again, but this time we take the output and pipe it into another cmdlet, Get-Member, which shows us a list of properties, members and methods. Since we only want properties, we ask it to show us only the type that is a property. Get-Member is a very helpful cmdlet which shows what options you have available for other cmdlets, in this case, Get-NetAdapter.

[fn:18] If you’re coming from Bash like I do, a similar effect can be achieved by piping: Get-NetAdapter -name "Wi-Fi" | Format-List -property "instanceID". This, even though it looks more complicated, might make sense to you as it did to me. The names in PowerShell are intuitive, so you can guess that Format-List presents some sort of a list, and you can see that you’re looking for a property named “InstanceID”. You attach the two parts of the command with a pipe like you’d do in Bash. Read a bit more at step 4 and then come back here; see if this note makes even more sense then.

[fn:17] “A cmdlet is a lightweight command that is used in the Windows PowerShell environment. The Windows PowerShell runtime invokes these cmdlets within the context of automation scripts that are provided at the command line.” You can read more about this on the official Microsoft website.

TAONAW’s First Photography Video

There are currently three unfinished drafts in TAONAW’s main org file[fn:36]. It seems as if the pandemic[fn:35] and social distancing have dried up my ideas and my ability to express myself. On Tuesday I took a walk in the park and captured some flowers and birds with my camera. As it turned out, the walk outdoors recharged my batteries. I came back home and sat down to finally record my first video about photography. Here are the details about how I did it and why.

TAONAW seeks to use free technologies for content, and broadcasting video is another such example. I was surprised at how easy it was to set up an account and upload the video to https://video.autizmo.xyz, an instance of PeerTube. While YouTube and Instagram are unavoidable for reaching photography enthusiasts, I’m going to use them only as secondary outlets[fn:37].

I’ve created a channel for TAONAW where I plan to keep uploading photography-related content. There are additional options and information to add to the channel, which I need to revisit. I also need to follow up with a YouTube channel, which will work like Medium does for my content: presenting older, selected material. Since I’m not bothered with making a profit from the videos, most of YouTube’s settings make no difference to me and I’ll just leave them at default. It’s a headache next to the simplicity of PeerTube.

To create the video, I used my Sony a6000, the camera I’ve been using to take my photos since last October, and an AmazonBasics 60-Inch Tripod which I purchased two months ago. Copying the video to Hedwig, my desktop computer, is a breeze thanks to Gwenview, which comes with KDE. I then edited the .mts video file with OpenShot, cutting out ambulance sirens (I live next to a hospital) and adding text comments. FFmpeg has been a tool of choice for a couple of years, and I used it here to compress and brighten the video, since I underestimated how dark my room is for the camera (learning from mistakes is the only way to learn).

The resulting video is not as bad as I feared, but of course, there’s much more to learn. The challenges I encountered were not technical: the biggest one turned out to be finding a spot that doesn’t show the mess in my room; I ended up arranging my photos on my wall. Another was figuring out what to say: not a lack of things to say, but how to say them, how to keep to the point. It is very different from writing a blog post and editing it later.

I’d love to hear comments, suggestions and general questions about the video and the topic itself.

{{< rawhtml >}} <iframe width="560" height="315" src="https://video.autizmo.xyz/videos/embed/2ecdc35c-ad1d-4bcd-8d92-3df21a117136"></iframe> {{< /rawhtml >}}

Footnotes

[fn:37] I started my photography journey with Instagram, where I still make connections and followers, but going forward I’m planning on giving Pixelfed the main stage.

[fn:36] If you’re curious about how my blog is constructed and written with Emacs, look at the About section or read this post.

[fn:35] I always like to link to Wikipedia articles when possible, and I find myself in awe at the richness of information being gathered right now, as the members of this dedicated community build a whole subsite inside Wikipedia just about COVID-19. If you haven’t yet, please take a look, even if just to see the list of resources and links at the bottom of these articles. So much free and available information.

Staying Sane While Social Distancing

Good morning from New York City, the new epicenter of COVID-19. This morning, I want to share a few tips that helped me cope with social distancing, keep mostly calm, and otherwise remain productive.

I’m lucky I don’t live alone. I share an apartment with my two partners, and we interact on a daily basis. There’s something about making coffee in the morning for more people, or cleaning the dishes more often because you know others need them too. I enjoy being useful to others, if you can’t tell from this blog. Conversations face to face help, and if you’re one of the fortunate people to have someone living with you, be it family, significant others or roommates, use that. Even only for a couple of minutes in different periods throughout the day. If you’re by yourself, try to schedule a video conference with friends and family.

While we’re all socially distancing ourselves, this is actually a good time to meet new people virtually. I’ve enjoyed streams on Twitch (you can find people from all over the world; last week I talked to a few folks in Italy, for example), and more people are interested in starting a conversation on different dating apps. After all, there’s nowhere to go. Interacting with new people, as well as with those you know, even virtually, really helps to soften the blow.

Something everyone can try is to enforce a routine. One of the things that keeps my morning energy going is a series of little objectives repeated each morning. I clean the Mr. Coffee and prepare coffee. While it’s brewing, I check the sink for dirty dishes and the dishwasher for clean ones. I take out the trash. Then I go to my room and do a short series of exercises for about 15 minutes while catching up on news (usually NYT, sometimes NPR). I then sit at my computer with my mug of coffee and either learn more about the situation, write in my blog, or check on my emails and work. I don’t always do all of my habits, and I often “break the chain,” but I know that if I keep it up, for as long as I do, my mood remains positive and I tend to be more productive.

Speaking of exercise, it deserves its own paragraph. Gyms are closed, but now is actually a good opportunity to exercise more. There are many sources online that will teach you different stretches, yoga, or Pilates. There are YouTube videos, quick programs from the NYT and other media outlets, and tips from nyc.gov (these are geared toward older adults, but they will do nicely if you’re younger, just ramp up the difficulty). Walking outdoors, though it has become a more controversial topic recently, is still generally recommended. As long as you can maintain your distance (which is the key issue), jogging outside recharges your mental batteries and burns fat. Just pick a street that is not too busy, and run slowly, which should allow you to get around people. Be generous to others, and if you see a narrow fit ahead, slow down and allow people to pass. Cross the street if you see a group of folks talking without masks on, which unfortunately still happens.

As an amateur photographer, going outside is a crucial activity. These days I’m lucky to have found a hobby that is mostly solitary anyway, and usually keeps me away from people. The spring is slowly coming, and this means you can encounter different flowers and birds. Here’s a picture I took last week in Fort Tryon, while maintaining much more than 6-feet distance from other people: images/20200329.jpg

Since I essentially work for a hospital, I still support our users’ machines. 80% of the work is remote, though we’re still required to come to the office and show physical presence on a rotating basis in case IT support is needed on site, which in our case can mean hospital equipment. Working remotely is an interesting challenge in itself, and when I’m not working, I’m playing video games to keep my mind busy and relieve stress. There are amazing free deals on all major gaming stores - Steam, Epic, GOG, Humble Bundle. And if you just have a laptop, don’t worry, there are quite a few games that will work fine there too and keep your mind busy. I recommend Oxygen Not Included if you like management-style games that require thinking, Darkwood if you’re into horror, Stardew Valley if you’re into something peaceful and relaxing, and Thimbleweed Park if you’re into old-school point-and-click quests. All of these are under $30 and offer hours of fun. Other big AAA titles are also inviting right now if you have fellow gamer friends. I recently started playing Division 2 online with my co-workers. It’s a lot of fun.

These are some helpful things that keep me going. Writing this post and hopefully using it to interact with more people is another good way to be productive and social during these times. What are some of your tips?

Old Software

What do Emacs, SSH, FTP, IRC and ffmpeg have in common?

These are all “old” programs[fn:38] that stood the test of time and are still in active use today. Yes, ffmpeg is maybe not as “old” as Emacs, with its roots back in the 1970s, and you could argue IRC is barely used next to the giants of social media today, but that doesn’t mean they’re not usable.

In fact, they’re not only usable, but often better off than modern software that gravitates more and more toward the cloud. My argument here is not that these old tools should replace today’s and tomorrow’s popular giants. Actually, the fact that they can’t, the fact that they are “frozen” in time and only work for geeks, is a big part of the benefit.

Take, for example, IRC, or Internet Relay Chat. Using IRC requires some know-how of commands like /join or /nick and even /whois that today’s users would know nothing about. Modern applications such as Slack and the like have similar commands, but these are artificial like plastic plants: a nod to the past, with often the same functionality a mouse click can achieve in the GUI. Most users without a technical background won’t even know these exist. IRC does not have a CEO behind it; it does not have a single piece of software at the mercy of patches and version numbers; it does not have to abide by the whims of shareholders and change its TOS every couple of months into another confusing legal document no one reads.

I could pick up a Linux administration book from 2007, and most of it would still be true today. Administrators, or those who think like admins, tend to prefer the proven and stable, not the new-and-flashy. The level of granular control, the options, the out-of-the-box availability, the small storage footprint, and of course the lack of any payment, all add stability and familiarity to a package that is hard to defeat. I could use 3 different Linux distros, a Mac and a Windows PC, and I will still trust ffmpeg with my video compression and editing more than I would any Adobe product. These are simply the best solutions I’ve found.

You could argue it’s all a matter of taste, and I’d say sure, everything is. I find comfort in the terminal, in typing in commands which show me what I’m about to do, and how. I know that I could type in the same command 5 years from now, even 50, and expect the same thing to happen - as long as the machine these programs reside on is alive. In turn, this attitude pushes me to learn more about how things work. At first, a regular user might not understand why I spend 2 hours learning how to script something that takes them 3 clicks to accomplish; but later, when the GUI clicking needs to be repeated for 50 or 100 items, or when they need a new setup that comes with a new workflow, they’ll understand.

Footnotes

[fn:38] Technically, IRC, SSH and FTP are protocols, not software; but the idea I’m driving at is age, stability and usage. ffmpeg is a collection of software and Emacs is a program (in a way, perhaps a collection of packages in itself as well), and the same point can be applied to both of these as well.

D.D.T.C Day 55

Distancing Due to COVID19, Day 55. What are the effects?

One effect that comes to mind immediately is updating this blog. It was left untouched for a month (my last post had been growing mold for a couple of weeks, originally written in April). Other DDTC effects include working from home and providing support to users remotely, lack of exercise, and “artificial depression,” as I call it.

For me, distancing started on March 17, after my partner was exposed to a potentially positive COVID19 carrier. After two weeks of social distancing, my office closed as well, with my team working from home.

I work for a medical center, which means the office has to remain open. We started a one-person-a-day rotation while the rest of us work from home. It was weird to come back to an empty office when it was my turn to be there, but perhaps weirder is the fact that by now, this has become somewhat normal.

Remote work forced us to gain more experience with different remote-support tools. Connecting to the office computers with VPN is second nature for us, but it’s not that obvious to users who suddenly need all kinds of “required software” that they never gave a second thought to in the past. Even more challenging is the gray line of “owning” a user’s computer once you connect to it to troubleshoot. Most users just want their problem dealt with, with little concern about what you’re doing on their personal machine after they give you remote access to keyboard and mouse.[fn:32]

Not everything is work and technology (even though it seems so right now). If you ask me what’s the thing I miss the most right now, I’d say the gym. My exercise routine, somewhat trivial before DDTC, has become a concern. Going to the gym used to be a 2-to-3-mornings-a-week routine. Walking up and down stairs at work to show up in different offices, clinics and labs used to complement my exercise, as did walks to the park with my camera. Now, with the majority of these activities almost extinct, I feel sluggish and otherwise “rusty”. I can and still do take walks, but in NYC, these often mean an inconvenient zigzag around people coming toward you and half-circling groups of teens who still, for god-knows-what-reason (besides being teenagers, I guess), insist on yapping loudly with mask-less mouths.

I’ve never been big on travel, but forcing myself to stay home as much as possible in an NYC-size apartment takes a toll. I can’t explain exactly what it is, besides that everything feels like it has more weight. Something about not being exposed enough to other things, bad or good, outside of this immediate second skin has made me emotionally numb. I eat less, I sleep more. These symptoms remind me of another state I’m pretty familiar with: depression. The more I stay in, the more sluggish I am. I call this “artificial” depression because I feel it’s manufactured, not naturally occurring out of the blue as depression spells can be. The reasons are well known, and as a result, like many artificial things, it has a certain taste “natural” depression does not have. Knowing that I’m choosing to do this to myself for the “greater good” seems to lose importance as time passes. You start wondering, what is the greater good anyway? Good for whom? For what?

The biggest cure seems to be to leave the house and, if possible, run around the little nearby park. Running always sheds the numbness and allows me to feel more positive. I’ve also turned to personal tech projects to keep my mind sharp and things interesting[fn:33]. Let’s see what I can come up with.

Footnotes

[fn:32] The lines are pretty clear when the device is owned by the organization, but there’s a whole post to write about the gray lines of how much is “owned” by the tech when connecting to a user’s personal device.

[fn:33] There’s a lot to write here as well. For one, I returned to use Windows as my primary OS and use Linux in a VM. This is mostly (again) because of games, but also because of work requirements from home. Other projects include more scripting and automation, and re-thinking my backup system.

Mounting an SMB Share on Linux

Mounting SMB shares on Linux machines (or VMs) is an important skill for anyone using Linux. It can be quite confusing, especially for newcomers.

This guide assumes that you have two machines, Windows and Linux, on the same network and same subnet. It also assumes you’ve already shared the SMB folder with the right security properties: if you need help with this, look here for starters or find other helpful articles like this one.

Let me start by throwing the mount command at you and then explain what it does. Here it is:

sudo mount -t cifs -o user=[your Windows user] //[host]/[share] /[path/to/mountpoint]
  1. We start with sudo because mounting requires root permissions.
  2. mount is the command to mount a filesystem.
  3. -t tells mount that next we’re specifying the type of filesystem.
  4. cifs is our filesystem type[fn:34].
  5. -o tells mount that next we’re specifying options.
  6. user is our option; this is our Windows system user.
  7. Next we specify the SMB host and the path to the shared folder.
  8. Last, the mount point: the folder on Linux where we’ll be viewing the files.

Here’s an example:

sudo mount -t cifs -o user=seriousam //192.168.1.12/space /media/sserious/space

Notice the two things highlighted above: the Windows user (seriousam in our case), which we can find simply by going to C:\Users in Windows, and the host, which is identified by its private IP address. Depending on how you have your Linux machine set up, I find that it’s safer to use an IP address (and use your router’s DHCP settings to always give your machines the same IP address so it doesn’t change) than to use the machine’s host name.

Mounting this way comes with two annoyances. First, because you have to mount as root (using sudo), only root will have read/write permissions on the folder. You won’t be able to download files and save them to the SMB folder without sudo-ing, for example. Second, you will have to mount the share each time you log in and enter the password for your Windows user (the mount command has a “password” option, so you could type it as part of the mount syntax if you’d like).

The better and more acceptable solution is to mount your SMBs in your fstab. This is where Linux goes to find out what hard drives and partitions you’ve got, among other things (like SMBs), and how to mount them. It might look scary, and for a good reason: mess up your main filesystem’s entry, and Linux won’t know where to boot from[fn:39].

To edit fstab you’ll need sudo, like all things that are system-wide and not in your user folder: ~emacs /etc/fstab~[fn:40] (or use nano, or Vim, or what have you).

fstab starts with a nice comment block about how to get your UUIDs, which is very important if you’re trying to mount a physical disk. The first line of code specifies your main Linux filesystem’s UUID, and under it, most likely, your swap partition. Remember I told you how you can fail to boot if you mess with your fstab? Stay away from these lines.

Start a new line under the last with a comment that will help you figure out what you were doing in the future, and then specify the following:

//[host-ip]/[share-folder] /[path/to/mount/point] cifs credentials=/home/sserious/.smbcredentials,uid=1000,gid=1000 0 2

OK. This looks a bit like the mount command, but what’s with all the extra stuff?

  1. //[host-ip]/[share-folder] this we already know.
  2. /[path/to/mount/point] this we also already know.
  3. cifs the filesystem. Notice how we don’t specify a -t for type here; that flag belongs to the mount command, which is different from fstab. Take a look (don’t touch!) at the Linux disks above: probably ext4 and swap, which might look familiar from when you installed Linux and defined your partitions.
  4. credentials=/home/sserious/.smbcredentials uh oh. What’s this? fstab needs credentials to mount the share; this we know from the mount command. Since we want fstab to have everything it needs so it won’t stop us during boot complaining about lack of access to the SMB, we want to automate this process. In this particular case, the credentials are written in a file, .smbcredentials[fn:41]. I will specify what goes in it below.
  5. uid=1000,gid=1000 Remember how we had the problem that the SMB share was mounted by root, which meant no other user could save to it? This part sets ownership to your “regular” user, which most likely has a user ID of 1000 and a group ID of 1000. Run id -u to get your user ID and id -g to get your group ID.
  6. 0 2 I’m still not 100% clear on these. The first number, which can be 0 or 1, is for dump; the second is for fsck, and can be set to 0, 1, or 2. Dump is a way Linux backs up files, or used to back them up. It is not really used anymore (you’ll notice your main filesystem has a 0 as well), and the fsck number is the order in which Linux checks the disks at boot. Your root filesystem has to be 1 (and it is by default) and everything else can be 2. We don’t really need it for SMB because the disk lives somewhere else; mounting it is a matter of logging in, not checking files, so we could also put 0 there so it won’t be checked at all.
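Putting the six fields together, a complete fstab entry (using the hypothetical IP, share and paths from the examples above) would look like this:

```
# Mount the Windows "space" share at boot
//192.168.1.12/space /media/sserious/space cifs credentials=/home/sserious/.smbcredentials,uid=1000,gid=1000 0 2
```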

The .smbcredentials from step 4 looks like this:

username=seriousam
password=password123
domain=.

That’s it. You now have all you need to mount the SMB share. Make sure you save fstab, restart, and see if you have access to your SMB.

Footnotes

[fn:34] The simple and short explanation is that CIFS is basically SMB. But this is not exactly true. CIFS is a dialect of SMB. SMB improved over the years, and today we’re beyond SMB 3.0, which is developed by Microsoft for Windows, while Linux kept the CIFS name, which was also improved over the years along with the kernel, so it is, in a way, a modern blend of SMB of its own and not the original CIFS from the 80s. Confused? Me too. I’m sure someone who knows more about this could comment and explain. For now, for practicality’s sake, we can pretend SMB = CIFS.

[fn:39] The solution to this, by the way, is to boot up with a live USB just like you did when you first installed Linux, mount the partition of the Linux install you messed up, find the fstab, and fix it – not that I know anything about messing it up! No sir…

[fn:40] Didn’t I just say we need sudo? Why yes. In Emacs you do that by using TRAMP: open the file as /sudo::/etc/fstab.

[fn:41] Giving out a username and password in plaintext in a file sounds like a terrible idea. However, since you encrypt your entire hard drive (which you’d better do) or your VM, the assumption is that if someone got around your hard drive encryption and found your username and password to log into Linux, you’re f-ed anyway.

Benefits of Linux on VM

With the majority of work happening from home, I decided to go back to Windows. I was stubborn at first, and continued to host a Windows VM inside Linux, but eventually I acknowledged that I’d have a smoother workflow the other way around.

For me, Linux has always run well in a VMware virtual machine. At work, I’ve been working with my SUSE VM for three years to have org-mode next to me. In a VM, the setup process always seems to be smoother than a native install. A VM also offers snapshots (a feature that comes with the non-free Workstation option, but can be done indirectly in the free version by simply saving a copy of the VM files).

But before we go further, how could I ignore my strong feelings against Windows? The short answer is that I didn’t. I still dislike all the things I’ve mentioned about Windows. It still has a pushy, buggy, and undependable update procedure. It still keeps throwing a bunch of “click me!” notifications at me that often float on top of the system tray and block it[fn:42].

To fix the updates, I spent an afternoon going through the different group policies and disabling what I could to prevent Windows from messing up my computer without warning. For the notifications, I disabled whatever auto-start applications I could and uninstalled a bunch of junk. I spend my productive time (like writing this post) on the right virtual-desktop full screen.

VMware is one of Windows’ blessings[fn:43]. I set up a fresh installation of pure Debian with KDE, something I’d wanted to try before because of Debian’s stability. Instead of chasing bleeding-edge distros, I built Emacs and Hugo from source to get the latest versions.

Here are some more things my VM setup offers me that I couldn’t get from running Linux natively:

  1. Keeping my IP address on the host while using VPN on the guest
  2. Mounting SMBs for Windows-managed and shared NTFS drives
  3. Bitlocker & Powershell
  4. AAA Games that won’t run on Linux
  5. Work essentials: VPN, Remote support software, RDP, etc.

Yes, I know that 90% of the things I listed can be made to work on Linux or have a replacement. I could have another VM inside Linux for VPN, for example, or I could try Lutris and Wine (not to mention Steam runs excellently on Linux) for gaming. But it usually requires extra tweaking, and in the end you end up having to contaminate your productivity environment with games and Chromium, which only work at 80%-90% of their potential anyway. I know because that’s what kept happening to me. I wanted to use Firefox, but I didn’t want to bring in all my work-related links from Chrome and my Google profile, so I couldn’t access those without restarting. I also had to restart every time I wanted to play Division 2 with my co-workers. This, in turn, cut me off from my other friends on Signal and from accessing my org-agenda. In the past, I used a laptop and a KVM switch, but the laptop doesn’t have nearly as much power for ffmpeg tasks, and certain things like my gaming keyboard were problematic.

As for work, it takes place on a Windows 10 machine which requires a “pure” RDP client, Bitlocker, and other applications that run only on Windows. Yes, there are workarounds, but these involved more questions than I care to answer. I don’t like nor need extra attention.

When I work at home now, I have my Windows VM roughly occupying a third of my ultra-wide screen, with my work-related websites or remoted-machine to the right. Additional office software that is needed for work is running directly on Windows, next to games and other distractions, where it belongs.

Will things stay this way? Who knows. Knowing how I am, I’ll get bored 3 months down the line and switch back to some new distro. But that’s how I roll.

Footnotes

[fn:42] When placed at the bottom, the Windows taskbar has more annoyances. I have it set to auto-hide, but when a program requires my attention, it will not go away and often blocks an important part of the screen. Another issue is confusion between the VM’s Linux taskbar and Windows’, stacked on top of each other. To fix this, I find it’s better to move the taskbar out of the way to the right edge of the screen.

[fn:43] VMware has Fusion for Mac and a Linux version as well. I only have limited experience on an underpowered Mac, where Fusion is impossibly slow, and its Linux version leaves much to be desired, so on Linux I dumped it in favor of VirtualBox. It seems to operate best on Windows.

Customizing a Theme for Emacs

What do you do when you are pretty comfortable with your Emacs theme and colors, but there’s one thing you have to change? You find the theme and you customize it to your liking. Here’s what I did.

When I switched to Debian on a Windows host a few weeks ago, I also switched back to darker colors. I looked for a matching Emacs theme, and for a while I settled on my go-to sanityinc-solarized, but it didn’t do. I’ve used it at the office for too long and it just didn’t feel right. I also didn’t like the newer Solarized: the headings change size in org-mode, and I find that it breaks the visual harmony of everything else in Emacs. I ended up settling on Gotham, with its blueish-green colors and true-black background. The headings are all the same size too, thankfully.

One thing that stood out like a sore thumb, though, was the org-verbatim face (that’s the format you get between two = signs in org-mode). The org-verbatim font was a dark gray, barely visible against the black background. I use org-verbatim to highlight lines of code and commands, so a barely visible color was the opposite of what I needed.

I didn’t want to hunt for a new theme, because I am fairly happy with Gotham’s colors and simplicity otherwise. Changing a single color in a theme can’t be that hard, I thought. I was happy to find out I was right.

Themes downloaded from MELPA are saved in ~/.emacs.d/elpa. I don’t know why elpa and not melpa, but that’s what it is. I found the Gotham theme folder and opened gotham-theme.el.

Right at the top, under the first function definitions, I found the theme’s colors defined in hex under a defvar. It looks like this:

(defvar gotham-color-alist
  `((base0   "#0c1014" ,(if gotham-tty-256-colors "color-232" "black"))
    (base1   "#11151c" ,(if gotham-tty-256-colors "color-233" "brightblack"))
    (base2   "#091f2e" ,(if gotham-tty-256-colors "color-17"  "brightgreen"))
    (base3   "#0a3749" ,(if gotham-tty-256-colors "color-18"  "brightblue"))
    (base4   "#245361" ,(if gotham-tty-256-colors "color-24"  "brightyellow"))
    (base5   "#599cab" ,(if gotham-tty-256-colors "color-81"  "brightcyan"))
    (base6   "#99d1ce" ,(if gotham-tty-256-colors "color-122" "white"))
    (base7   "#d3ebe9" ,(if gotham-tty-256-colors "color-194" "brightwhite"))

    (red     "#c23127" ,(if gotham-tty-256-colors "color-124" "red"))
    (orange  "#d26937" ,(if gotham-tty-256-colors "color-166" "brightred"))
    (yellow  "#edb443" ,(if gotham-tty-256-colors "color-214" "yellow"))
    (magenta "#888ca6" ,(if gotham-tty-256-colors "color-67"  "brightmagenta"))
    (violet  "#4e5166" ,(if gotham-tty-256-colors "color-60"  "magenta"))
    (blue    "#195466" ,(if gotham-tty-256-colors "color-24"  "blue"))
    (cyan    "#33859e" ,(if gotham-tty-256-colors "color-44"  "cyan"))
    (green   "#2aa889" ,(if gotham-tty-256-colors "color-78"  "green")))
  "List of colors the theme consists of.")

Pretty straightforward. Now, how to find the piece of text whose color I wanted to change? At the time I didn’t even know it was called org-verbatim. To find out, I went back to org-mode, found the text with the color I wanted to change, placed the point on it, and used M-x customize-face. This can customize the face right there, but more importantly, it suggested the face at point as the default value. That’s how I found out that text is called “org-verbatim.” Knowing that, I went back to the theme file and searched for it.

Searching for org inside the theme brought up the org-mode group inside the .el file, which included quite a few colors, but not what I was looking for. That actually made sense: the definition of org-verbatim was missing from the theme, which meant it inherited its color instead. I am not sure how the hierarchy for these groups works, but it seems like org-verbatim just picked up a generic comment-like color, which is deliberately dim and out of the way. OK then. So, if it’s not there, why not add it?

Looking at how these colors were constructed in the .el file, I saw definitions like these:

(org-sexp-date :foreground cyan)
(org-table :foreground violet)
(org-time-grid :foreground yellow)
(org-todo :foreground red :weight bold)
(org-upcoming-deadline :foreground orange)

The colors themselves, like orange or cyan, were defined earlier under the defvar part I mentioned above. So, all I had to do was add this line:

(org-verbatim :foreground yellow)

Instead of re-evaluating the theme, I prefer to save the configuration and launch another copy of Emacs, which gives me a fresh view of the results right next to what I’m working on:

images/20200528.png Take a look: to the left, the frame with the setting before the change; to the right, the setting after the change.
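One caveat: edits made directly to gotham-theme.el will be wiped out the next time the package updates from MELPA. A more durable alternative (a sketch, assuming your customizations live in init.el) is to override the face from your own configuration instead, reusing the hex value from the theme’s defvar:

```emacs-lisp
;; Override org-verbatim from init.el without touching the theme file.
;; "#edb443" is Gotham's yellow, taken from the defvar shown earlier.
(with-eval-after-load 'org
  (set-face-attribute 'org-verbatim nil :foreground "#edb443"))
```

This keeps the tweak in one place, and it survives theme package updates.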

Remote Work Environment

Using Elfeed to View Videos

One of the things that makes Emacs stand out is its modes (or “plug-ins,” for those who haven’t used Emacs before). The nature of Emacs being open source means that every mode is born out of a need. Nothing is “fluff.” Every good mode has a good reason to exist. The more people who share the same need, the more customized and refined the mode becomes. Indeed, some of these modes are superior to complete software packages, which often cost money.

Today I want to talk about Elfeed, one of these tools. Elfeed is better than any other RSS feed reader I’ve seen. The gif below will show you why:

images/elfed-view-vids.gif

What’s so Good about Elfeed?

Those of you who are familiar with Emacs and/or Elfeed would probably want to skip this section entirely and go below to my specific tweak. However, if you’re thinking of using Elfeed and wonder what’s so good about a mere RSS reader, read on.

First, a quick explanation for those of you who are not familiar with Elfeed or Emacs. What you see is a sum of three things:

  1. I open a window with my RSS feeds and filter specifically for YouTube channels that were updated in the last week.
  2. Once I locate the feed I want to watch, I move down to it and hit “v”.
  3. A local media player opens and shows me the video, completely free from any YouTube ads or other attention-grabbers I’d find using a full browser.

“Wait, are we still talking about an RSS feed? Doesn’t this mean we’re discussing a geeky way of reading text articles from blogs and such?”

That’s exactly where my earlier point comes in. Elfeed is not just an RSS feed reader. It’s an answer to a specific need, a solution to an existing problem. What is the problem, then? To quote its creator:

“Unlike many other feed readers, Elfeed is oriented around entries — the Atom term for articles — rather than feeds. It cares less about where entries came from and more about listing relevant entries for reading.”

To understand the issue Elfeed’s trying to fix, I suggest reading some of Irreal’s articles about it, and the post that inspired me to give it a try again. I had the pleasure of talking to the author of that blog, Noonker, directly. He had this to say:

“I consider many modern websites to be abusive and manipulative. They’re engineered in such a way to maximize ad dollars by keeping you engaged. They’re not designed for people. I really try to minimize my time on these sites and elfeed does a lot to keep me separated from the manipulative aspects of these sites.”

This is exactly what Elfeed is good for. This, I’d argue, is also what Elfeed excels at doing better than any other RSS reader out there. That’s because it’s already inside my productivity environment (Emacs). Its filters, customized to give me the exact updates I need, are a bookmark (C-x C-r l) away. This means that the updates I care about are only 5 keystrokes away, burned into memory.

Case in point: during these times of COVID-19, I used Elfeed to watch Governor Cuomo’s (I live in NYC) video feeds without opening my browser and without going to YouTube. Not only did Elfeed help me stay away from YouTube’s “Oh! Watch this also!” algorithms and ads, it also made sure this briefing was part of my daily news dose, along with the NYC section of the New York Times and new org-mode posts on Reddit[fn:44].
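For the curious, the kind of filter behind that bookmark lives in the elfeed-search-filter variable (you can also set it live with s in the search buffer). A minimal sketch, assuming the video feeds are tagged youtube (the tag name is my own convention):

```emacs-lisp
;; Default Elfeed filter: unread entries from the last week
;; that carry the (hypothetical) youtube tag.
(setq-default elfeed-search-filter "@1-week-ago +youtube +unread")
```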

The Specific YouTube Viewing Function

Noonker generously provides his YouTube download function in the post (here it is again). I took the function and manipulated it after watching Mike Zamansky’s helpful video:

(require 'elfeed)

(defun elfeed-v-mpv (url)
  "Watch a video from URL in MPV"
  (async-shell-command (format "mpv %s" url)))

(defun elfeed-view-mpv (&optional use-generic-p)
  "Youtube-feed link"
  (interactive "P")
  (let ((entries (elfeed-search-selected)))
    (cl-loop for entry in entries
             do (elfeed-untag entry 'unread)
             when (elfeed-entry-link entry)
             do (elfeed-v-mpv it))
    (mapc #'elfeed-search-update-entry entries)
    (unless (use-region-p) (forward-line))))

(define-key elfeed-search-mode-map (kbd "v") 'elfeed-view-mpv)

I was just comfortable enough to manipulate the code a bit after doing some reading on Elisp, but I can’t quite explain it fully yet.

The idea is that there are two functions at play: one launches mpv with a given URL through an async shell command, and the other grabs the URL from the selected feed entry and passes it along (storing it in a symbol, the Elisp spin on what would be a variable in other languages. Well, kind of). The last line assigns the shortcut key “v” to view videos, which works only in Elfeed (so v is free to be used for something else elsewhere).

To better understand the whys and hows, I suggest watching the video mentioned above as well.

An important part of this setup for me is MPV[fn:45], which immediately beats VLC in minimalism. I used Plasma’s powerful window manager to have MPV run without a window frame at all, giving it a quick “just the vid please, thanks” interface, which further enhances what I wanted to get from the experience.

Footnotes

[fn:44] There’s more to Elfeed’s RSS reading philosophy here. Finding relevant RSS feeds is not as simple as going to a big media site and copying down their main RSS feed. Doing that means I’d get every single article, which takes me away from staying on point. RSS feed hunting, as it turns out, is an art form in itself. Not all major websites support it fully, and it can be challenging to find (or create) what you need.

[fn:45] After a week of using MPV, I uninstalled VLC. VLC is OK, but once you try MPV, you stay with it. It is lean and doesn’t have the bloat of VLC, which can do so many things that I just don’t need it to do. It works without controls (they come up as you hover over the screen) and encourages you to use keyboard keys to skip, control the volume, etc. It probably deserves a post of its own.

Standing Desk

At first, I only noticed an increase in back pains. Then there was general restlessness that didn’t allow me to concentrate on long-term projects at work. There was a nagging feeling of “I miss something” coming from my body, not so much my brain. And then, one day at the office during my on-site duty, I pulled up my screen and keyboard to their standing position and it hit me: I need to stand.

images/standing-desk.jpeg

In the past, a standing desk was a gimmick I attributed to yoga-type folks and others who had back or posture issues[fn:46]. When a co-worker left us for another place, I adopted his Ergotron WorkFit because it had two keyboard trays, which I needed (I work with a Mac and a Windows workstation at the same time), and I thought standing up would be a nice “perk,” since I often enjoy walking around or talking to people while standing.

What I didn’t know is that shifting my weight would help me concentrate and become a big part of my morning ritual. The habit of standing up in the morning when I get to the office was so hard-wired that at one point even my manager said he can’t talk to me before I pull up my desk to standing position and prepare my oatmeal cup.

The Research

The research was fun. A full standing desk was out of the question because of its price and the fact that I already had a nice IKEA desk that fit my NYC-size room nicely. I knew I was looking for a standing desk converter, a fancy term for a keyboard-mouse-screen elevator. Indeed, I first aimed for cheap options from Amazon, because I didn’t think of a converter as more than that.

My go-to website for information on quality products when there are several options is the New York Times’ Wirecutter. While it tends to lean toward expensive things in general (it sometimes feels like they do it just so they have a “budget option” to add to the review), it is a good place to get a general idea. Their best standing desk converters review was helpful.

At the time, $400 seemed like a lot of money for something I could just use a bunch of boxes for, so after reading the full article (you need to keep scrolling past the “The Research” part) I carefully did some Googling Duck-duck-going and stumbled onto Work While Standing. This website seemed like a shallow review site full of ads at first, but as I read more of their reviews I saw that wasn’t the case. Their “What is a Standing Desk Converter?” introduction was very informative and showed me what each type of converter does, what it’s good for, and why sit-stand desks tend to be more expensive[fn:47].

The information on both sites helped me realize a sit-stand converter is a three-in-one solution for me:

  1. The stand that came with my ultra-wide monitor[fn:48] was a piece of junk. It barely supported the weight of the monitor, and I was already searching for a better stand that wouldn’t let it wiggle and wobble at every slight movement, including typing.
  2. I realized that, as Wirecutter noted, the screen does need to be higher up when you stand than when you sit down (try it - you’ll see that when you stand there’s more space between your eyes and your typing hands). This is an issue that had been bothering me for some time at the office.
  3. My narrow IKEA desk and room setup meant that I didn’t want a desk that would push toward me as I raised it. I also didn’t like how the Z-shaped legs of the other converters looked on the desk, blocking off the area under the desk in the upper position like a cage. I wanted something cleaner looking.

From there, it was easy to conclude that the Kangaroo Pro model from Ergo Desktop was right for me: it included the extra feature that allowed the screen to move independently from the tray, but at the same time, I didn’t need the Elite series, which included an electric motor.

The final deciding factor was the measurements: it turns out that the Kangaroo Pro Junior’s depth was the perfect fit for my desk; the bigger Pro would be too wide, with its tray pushing out toward my chest and covering the drawers of my desk. The desk had a clean, industrial look to it that I liked. I also found a coupon online just before I placed my purchase, reducing the cost of the desk by $20. Shipping was free of charge. Nice.

Set Up and Some Hacking

The Junior Pro came in a large, flat, heavy box, kind of like a flat TV screen. Because I don’t trust USPS to ship anything to my home address (that’s a long story), I shipped it to my office instead and took it home from there. I’d say I’m in rather OK shape and I handled the box fine, but I wouldn’t recommend carrying it a long distance or up the stairs without help. It was just heavy enough to be a bit much.

Assembly was easy enough. There weren’t many parts, and the instructions were clearly printed in full color (you can also view the PDF on their website). The only tough part was the monitor: because of its size, it was difficult to hold it with one hand while adjusting the stand so the screw holes lined up just right. This is also when I ran into more problems with my screen.

As it turns out, my ultra-wide monitor has its VESA screw holes a bit lower than they should be. The Kangaroo’s screen-tilt lock doesn’t provide much friction as is, and it was not possible to lock the screen at a straight angle. It kept tilting downwards, hanging on the adjustable tilt screw. Another issue was the screen’s HDMI socket location, straight in the back, where it hits the metal shaft of the stand. Judging from the complaints on the Amazon page, where people ran into the same thing while trying to mount the monitor against a wall, this is a known issue with the monitor. In short: this is a problem with my monitor, not the new desk.

I fixed the problem temporarily by placing Styrofoam between the screen’s mounting stem, which was sticking out a bit, and the metal shaft of the screen raiser. I contacted Ergo directly about my issue and got a reply the next day: they were shipping me another screw to replace the tilt brake. After playing around with it, I found it was better to use the screw they sent as a stopper, which fit in one of the grooves of the metal shaft, than as a replacement for the brake as they suggested. This locks the screen tilt in one position, but I’m fairly comfortable with how it stands now and I don’t need it to move. The result is a wobble-free typing environment, both standing and sitting.

A few More Words

Getting a standing desk converter is easily the biggest upgrade to my work environment in recent years. At first I didn’t want to spend so much money on something that seemed to be a “perk,” but the quality of the product, the way it makes the whole room look, and the inviting feeling I get every morning when I stand at it with my fresh cup of coffee are priceless. Still, this is a highly personal choice. I can’t recommend this standing desk solution for everyone: too much depends on individual quirks, measurements, budget, and more.

Of course, this post wouldn’t be complete without praising org-mode again. A picture is worth a thousand words, so here, have a look:

images/standing-desk-research.jpeg

You’re only seeing half of it, since my thoughts (under the closed headers) are just too long to fit here. org-mode made it easy to write down details as I was browsing and reading, and it also kept my thoughts organized and prevented me from going back in loops over conclusions I’d already made.

Footnotes

[fn:46] An interesting irony, I realize: I am one of these folks myself.

[fn:47] That’s because sit-stand converters effectively include a stand for the screen. Other converters are essentially just a raised tray on which you put the screen along with the keyboard and mouse; with sit-stands, the screen mounts to the back. Better models include an independent raising mechanism.

[fn:48] Probably another post in itself, an ultra-wide monitor is another must-have in my setup today. It is not as heavy and messy as having two monitors while providing much-needed work space, not to mention a good gaming experience (which is horrible on dual screens, in my opinion).

Refactoring CSS

There’s a popular saying in IT: if you receive no comments about a project you just finished, you know you’ve done your job right. The latest commit to TAONAW is one of the biggest I’ve ever done, but I hope you’d barely notice anything.

One of the issues I wanted to fix in TAONAW was the mobile layout. The original theme, Hyde, comes with a narrow theme variant aimed at smartphones and tablets. On TAONAW, it was always a broken mess of inherited CSS rules coming from the wide (desktop screens) variant, which was what I always focused on. The narrow theme became even worse after I had my logo made. The logo covered the sidebar almost completely, making navigating the site on smartphones practically impossible.

I’ve toyed with my CSS here and there, but I knew I needed to go deep to fix the narrow theme. The different CSS tricks and hacks I used kept the site working through workarounds, but overall, it was still a mess. This week I finally got some time off for serious refactoring[fn:49].

When you narrow the browser’s window now (go ahead, do it), it will switch to “smartphone” mode when you reach 800 pixels wide. I tested the site on both my iPhone and Android, and it looks as intended on both[fn:51]. The logo, which now fits better on the wide version next to the title, also sits well at the lower right corner on narrow screens. Shrink the screen more and the logo will slide left to accommodate (eventually it will overlap the text of the navigation bar, but I don’t believe there’s a screen that narrow intended for reading).

The CSS code (in the hyde.css file) now has the common rules for both themes at the top, followed by the wide version and then the narrow version. For example, the title font for the site is the same for both themes, so it is defined at the top; however, the font is a different size on the wide version (4em) than on the narrow version (3em). This is one refactoring example of many. I also took out large chunks of unnecessary code that came with the original theme.
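The resulting structure looks roughly like this, mobile-first: the narrow value is the default, and the wide override kicks in at the 800-pixel breakpoint mentioned above (a simplified sketch; the selector name here is illustrative, not the exact one in hyde.css):

```css
/* Common rules, shared by both layouts */
.site-title {
  font-size: 3em; /* narrow (smartphone) default */
}

/* Wide (desktop) overrides apply from 800px and up */
@media (min-width: 800px) {
  .site-title {
    font-size: 4em;
  }
}
```

Structuring the file this way means the narrow theme no longer inherits stray desktop rules by accident; anything desktop-only is fenced inside the media query.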

There are many examples I could point out and expand on, but I won’t get into them right now. If you’re interested, the site and the theme are available to all on GitHub, where they’ve been since day one. For me, the biggest satisfaction was learning all the new CSS “stuff”[fn:50] and having an organized CSS file I’m familiar with, to keep fixing and adjusting the site in the future.

Footnotes

[fn:49] The term is usually reserved for “serious code,” but after working a couple of hours on CSS, I think it describes well what I’ve done here.

[fn:51] Like other projects of similar size, you only see additional issues after you’re done. A few tweaks to the narrow theme still need to be done, but now it will be much easier to get to them.

[fn:50] When I think of it, CSS is immensely useful. It can be used to present anything (through the form of an HTML page), whether it’s my wiki built in Emacs org-mode or a themed document turned into a PDF.

Back to basics: org-mode agenda

Since I set up Elfeed to suit my needs, I’ve been following the org-mode community on Reddit regularly. Many of the questions I see there from newcomers show a rush towards custom-made packages or bits of code without awareness of the powerful built-in features org-mode ships with out of the box.

I want to start a short series of “back to basics” posts. I hope to show the philosophy behind the flow I use as well as org-mode’s plain, powerful features.

The Agenda

One of the first things to reconsider, especially for the newcomer, is the agenda. To quote orgmode.org directly:

As its name suggests, the most common use of the agenda is to pull together, from all the agenda files, a daily or weekly schedule or a list of todos. But the agenda is also a powerful search engine that offers various tools for querying both the metadata and the text of org-mode entries. In fact, Org-mode’s default agenda view (C-c a a or org-agenda-list) is simply a search tool that gathers and displays all org-mode entries with certain types of metadata—timestamps that fall within a given range of dates.

This is probably one of the most overlooked ideas. Everyone knows of the agenda, but many underestimate how powerful it is. A newcomer comes in and asks how to do something the agenda already provides, and someone will suggest a package from MELPA or a bit of code, when the “vanilla” version provides what’s needed right out of the box.

Take the concept of the agenda as a search engine, and suddenly many of the built-in powerful commands make sense. For example, C-c a s for a Boolean or regex search through all your .org files[fn:52]. Or C-c a T for a specific keyword, which I use often for my projects, marked with “ACTIVE” (one simple search and you see a list of your active projects). I use these powerful commands 90% of the time and hardly look anywhere else. I highly recommend the Worg page on this, which has a lot of details about searching and helpful examples.

Organization is important, so let’s discuss categories (I wrote about this before). The idea is to group certain org files or headers together to reflect one major “life area”[fn:53]. One obvious example, and almost the only one I use, is my personal and work categories. I have a “task” capture template that I use often and capture into a temp org file. I then refile these tasks under my “Personal” or “Work” headers in my weekly org file. All that is needed is the category property. Not only does this quickly organize my agenda into work and personal projects and tasks, it’s also easy to focus on one category in agenda view by hitting < while standing over it with the point. One key press and you see only work-related stuff; one more and you’re back to seeing everything. Of course, you need to understand org-capture and use it generously. I’ve written about this a couple of times, if you’re interested (look under the orgmode tag).
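The category property itself is minimal. A sketch of what a weekly file might look like (the header and task names here are my own examples):

```org
* Work
  :PROPERTIES:
  :CATEGORY: Work
  :END:
** TODO Follow up on open tickets
* Personal
  :PROPERTIES:
  :CATEGORY: Personal
  :END:
** TODO Schedule dentist appointment
```

Every task refiled under one of these headers inherits its category, which is what the < filter in the agenda keys off.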

Keywords are another basic org-mode feature that can do much more for you besides just showing you to-do items. You can use them to help you understand what kind of task is ahead of you without any additional fancy code. I already mentioned my projects get the “ACTIVE” keyword (I wrote more in depth about it before), which is different from a task that is just marked “TODO”. I also have “JOURNAL” and “WIKI” keywords, which are pretty self-explanatory. The WIKI one is especially useful if I want to remind myself to document a process I’ve done. I can later search the agenda for all my wiki assignments (remember C-c a T from before?) at the end of the week or during downtime at work and do them all at once. If you find that you’re dealing with a recurring kind of to-do process, consider expanding your keywords. Here are some examples that might be useful for you: “brainstorm”, “meet”, “discuss”, “wait”, “pending”, etc. The manual suggests (I can’t find where now) using keywords for people (such as “John” and “Jane”) if certain tasks require that you delegate what you’re doing. Remember that these are searchable and very visual. Use them to your advantage.
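Extra keywords are defined through the org-todo-keywords variable in your init file. A minimal sketch along the lines of the setup described above (the exact sequences are my own; adjust them to your flow):

```emacs-lisp
;; Keywords before "|" count as not-done states; after it, done states.
(setq org-todo-keywords
      '((sequence "TODO" "ACTIVE" "WAIT" "|" "DONE")
        (sequence "JOURNAL" "WIKI" "|" "DOCUMENTED")))
```

Each sequence cycles independently with C-c C-t, so a task moves TODO → ACTIVE → WAIT → DONE without ever mixing into the JOURNAL/WIKI set.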

So far, I’ve discussed features that help you search and find things on the agenda, but let’s switch gears to productivity philosophy for a minute. Here’s a “mindhack” that has been useful to me: stop thinking of scheduling a task as deciding when to do something, and instead think of it as marking how important that to-do item is. I don’t use the agenda priority feature. Instead, I just schedule important things for the early part of the day and things that are less important or optional for later in the afternoon. It’s very easy to re-schedule tasks (as easy as standing over one with the point and using > as a shortcut). Remember: the agenda is not an agenda. It’s a search engine. It’s OK to have 20 tasks on it scheduled throughout the day; it doesn’t mean you’ll get to all of them (and you shouldn’t).

So if you don’t use scheduling literally, how do you know when you actually worked on something? Let’s bring up another mindhack: clocking in and out of tasks. Many folks say it’s overkill, or they just forget to clock in when the work starts. Most of the tasks I clock, I clock after the fact. Why? A few reasons. First, because I schedule tasks as a way of prioritizing, it often happens that a task I started today I will continue tomorrow. If I clock my time on the task today, it will show on my agenda today (press l to see clocked items in agenda view) even though it’s scheduled in the future. If I just checked on something quick (say, whether a client has responded to an open ticket, and they didn’t), I clock in and out quickly, creating a 0-time instance that still shows on the agenda. This way I know I checked on the ticket today, even though the task still reads “check on ticket” - which is what I need to do the next day again. By the way, did you know you can clock in by pressing I and out by pressing O? Place your point on a task in your agenda view and try it.

I especially clock in on projects after I’m done working on them for the day. This gives me a good bird’s-eye view at the end of the week of how much time I spent on a large task. To add up all the time clocked on a task, place the point on the task’s header in the org file (not in the agenda: hit enter on the agenda item to jump to the header) and press C-c C-x C-d. The total shows at the bottom. If you stand on a parent header, this will also show you the sum of the sub-headers, which is useful if you have a big project broken down into assignments (as you should).

What about events that are actually scheduled for a specific time, like meetings or dates or phone calls? Easy. Use timestamps without the “SCHEDULED:” word at the beginning. You can also define a time range if you write it out like so: <2222-07-19 Tue 08:00>-<2222-07-19 Tue 10:00>. Don’t underestimate this. Headers with these timestamps will be colored differently on your agenda, making it easy to distinguish between actual meetings and your scheduled tasks. These are also the kind of events you can easily export to other calendar apps, like Google Calendar, later on. What I do sometimes is still add a TODO keyword to these events, as an indication that I have a checklist waiting under the header. So if I place a header like “Meeting with the boss” with a TODO on it, I know I have a few items to review before the meeting, or perhaps points to bring up during it. Small details like these can be very helpful.

There’s more to cover, but I should leave some things for next time. I hope this is useful also to those of you who’ve been using org-mode for a while (if so, what did you learn? What can you teach me?)

Footnotes

[fn:52] Think about this for a minute: a full regex search across all your org files. This is so powerful! You can search for something as simple as “Bob’s Cafe” to bring up all “Bob’s Cafe” results, but you can also use something like +Bob -Cafe|-cafe to find any org entries that have the word Bob and not Cafe or cafe (the search is case sensitive). You can go full regex with the search too if you want, which might be useful if you’re trying to find a specific location you know you visited a year and a half ago and wrote about in your journal.org file, and all you can remember is rambling about the great beer they had, the awful music, the cold, and… You get the point.

[fn:53] The idea of a “life area” is something you need to sit down and define. Categories are not tags; they are meant to be very general. Think of categories as routines or certain “brackets” in your life you spend more than an hour on every day during the week. Things like “Work,” “Exercise,” and “Health” make sense, depending on what you use org-mode for; things like “projects” or “vacation to Aruba” do not - these are too specific and would probably be better used as titles, keywords, or perhaps tags.

Full content in RSS feeds

Those of you who subscribe to this blog via RSS (if you don’t, you should) can now read full posts in the feed.

Hugo comes with a default RSS template that generates RSS for you out of the box. To override it, create an rss.xml file in your site’s /layouts/_default/ directory and use your own code, or, if you’re like me and just want to change something quick, copy the default template’s contents from GitHub and tweak them.

To allow full posts in the RSS feed (the default is only summaries, the bits you see on the main blog page), change the default “Summary” to “Content” toward the end, in this line: <description>{{ .Summary | html }}</description>. This is a nice tip I discovered at GODO.DEV.
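For context, the relevant item block of the template then looks roughly like this (an excerpt based on Hugo’s default RSS template, with only the description line changed):

```xml
<item>
  <title>{{ .Title }}</title>
  <link>{{ .Permalink }}</link>
  <pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</pubDate>
  <guid>{{ .Permalink }}</guid>
  <description>{{ .Content | html }}</description>
</item>
```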

Next, I need to add an RSS link to the sidebar for anyone else who is interested and doesn’t know about the blog’s feed. This is now done, though it’s rather ugly. Some CSS work remains, and possibly other links to include as well.

Keeping a healthy routine during COVID19

It’s been four months since COVID-19 started for me. I’ve been working from home, which means far less exercise than usual, especially with gyms closed in the city. Thankfully, I have a system that helps me to keep somewhat active, and today I want to share it with you.

Nothing I’m going to tell you here is going to be groundbreaking. I’m pretty sure I picked up bits and pieces of ideas I’ve heard in various places and glued them together to something that makes sense to me. Still, or perhaps because of that, it’s pretty effective without being too obnoxious.

The Elements

At some point in my life, I started basing what I consider healthy or wholesome living on the “elements”. Those are Sleep, Exercise, Nutrition, and Meditation. I believe that if I practice these four disciplines I can remain mostly healthy, mentally and physically.

For sleep, my goal is 6.5 hours a night. I often sleep less (my average is about 5.5 hours), so I try to make up for it with a nap (20 minutes or so) when possible, usually in the afternoon. This doesn’t always work, but when I take one of these power-naps I’m usually fine afterward.

“Exercise” is an umbrella term for several routines I picked up over the years. The idea, especially during COVID, is to keep things simple. I try to jog around a nearby playground at least once a week (the goal is twice), while keeping up push-ups, crunches, squats, dips, etc. at home in sessions of 10-15 minutes. I try to include stretches and/or yoga poses as well. This sounds like a lot, but I just do a few of these each time. Keeping up a routine is the single most important thing here.

As for food, I keep things as simple as possible. If a food item has ingredients I don’t understand or can’t pronounce, I try to stay away from it. Everything today has preservatives, stabilizers, acids, and of course, sugar (it’s crazy how there’s sugar in almost everything!), but I try my best. For example, I choose fresh vegetables where possible over anything frozen or processed. I work in a large medical facility, which makes eating healthier somewhat easier. Water is my main drink of choice, seconded only by coffee and tea.

The most elusive element in the list is meditation. I often struggle to remember to slow down and just breathe. I have a very specific breath technique involving counting which helps. I’m pushing to keep a routine of 5 minutes a day.

My Scoring System

The scoring system is what makes everything move during COVID-19. It’s what keeps me motivated and allows me to improve. Here is how it works:

Every morning I get up, the first thing I do is make fresh coffee. Then, as it’s brewing, I go to my journal and write about yesterday’s elements. I reflect on what was done and give myself a score. For example, if I do my push-ups, I get 1 point. If I eat home-made salad, that’s also a 1. But if I decide to party and have pizza with beer that day, I give myself a -1. Some things give more points: running is challenging (especially in this heat) so it’s worth 3 points, while each set of push-ups is half a point. The system is dynamic, and I change it according to what I need to work on. For example, if going to bed at the right time has been working out for a couple of weeks, it’s time to move on and get points for, say, taking those power-naps I mentioned above.

Right now, an OK day is 3 points, a good day is 4, and an excellent day is 5. I try not to go too crazy, because overdoing it means that the next day, or the day after that, I won’t have the motivation to do much. Aiming for a steady 3 or 4 points every day is the main goal for now as a healthy routine.

The Fitbit advantage

My Fitbit Ionic has never been more useful to me. When I went to the gym before COVID, I’d let it detect exercises automatically and never bothered with it too much. Now though, when an opportunity for exercise means skipping the elevator and taking the stairs, it’s really helpful to see when I was active and for how long.

The Ionic is also pretty good at detecting when I fall asleep and for how long. I find that it’s accurate within 5-10 minutes. It’s useful if you’re like me and you need to be dynamic but still have a routine.

The only point is to get better

When I started picking up on running again I was way behind what I used to be able to do in the past. I couldn’t even finish a single mile, I was constantly out of breath, and I was very discouraged. The scoring system helps, but only if you use it to reinforce the main goal: to get better.

If all you can do is 10 push-ups, then start there. Any good coach on earth would tell you the same. 10 push-ups a day this week means 12 push-ups next week. Whatever you do is better than doing nothing. That little you do will slowly build up and develop your routine.

It’s just as hard to go from 10 push-ups to 50 as it is from 200 to 500. That’s because it’s all relative to what you can do. If I score my results honestly, I see I’m making progress.

Back to basics 2:

To really get back to the basics, we need to go beyond org-mode and into Emacs. And in order to get into Emacs, we need to talk a bit about Linux. And in order to talk about Linux, we need to talk about the philosophy of Linux. This is a whole lot to discuss, and I won’t even scratch the surface of anything, really. What I’m going to share here are the purely non-objective opinions that led me down this path.

Think of your productivity as a spear. The spear of productivity. The whole spear is Linux. Emacs then is the spearhead. At the tip, the sharpest point, is org-mode, the tool of tools:

images/2020-07-25.png {{< rawhtml >}} <figcaption>The Linux spear of productivity</figcaption> {{< / rawhtml >}}

The Spear: Linux

Linux, like the spear, is lean and stable. You can lean on it for your work. As a person who uses macOS, Windows, and Linux on a daily basis, my experience is that only Linux is both predictable and stable (Windows is the worst offender) and minimal and customizable (macOS is generally harder to customize).

This is important because your productivity starts with your environment: a clean desktop, no extra notifications popping at you from all directions, and the ability to change how it looks and acts exactly the way you want. I use KDE’s Plasma desktop environment, which is the most popular on Linux today. It “borrows” tools from macOS and Windows (macOS’s hot corners + Windows’ start menu, for example).

My path into Linux precedes my quest into Emacs org-mode by a good few years, and I’m happy it did. I can’t imagine Emacs in another environment, though for some folks it works well on macOS (I haven’t heard from anyone who actually likes Emacs on Windows - they put up with it because they have to).

Without getting into the deep technical details just yet, here are some of the things I appreciate about Linux as a productivity environment:

  • Minimal interface with applications that offer as little “fluff” as possible
  • Once you download an app and use it, it’s yours:
    • No forced updates that will break it (macOS Catalina)
    • No payments and subscriptions (Office 365, Adobe)
  • When you learn and use the command line, the workflow is the same for decades
  • Plays well with other systems, Windows or macOS; the opposite is not as true.
  • Set it and forget it: scripts to automate your workflow will work for years.
  • Cheap: you can afford the best tools and can learn to use them.

Spearhead: Emacs

In the past, I used to consider my non-coding background a disadvantage as I slowly sank into Emacs. But code can be learned. Philosophy needs to be discovered.

  • If Linux is the productivity environment, Emacs is the Swiss-army knife for almost everything done in it.
  • Org-mode is what pulled me to Emacs: the tip of the spear.

Of Emacs and Rabbit Holes

I mentioned a “back to basics” series for org-mode, but I got stuck. I thought this would be a good chance to record webcasts, especially for newcomers, but recording videos turned out to be harder than I thought for various reasons. I gave up for now, after almost 10 failed attempts.[fn:55] With that out of the way, I got pulled into another area of interest that seems to be everywhere in org-mode circles these days: Zettelkasten. Something clicked.

Zettel… Who?

Instead of giving you a half-baked explanation, go here. Read what the original thing is about, and then search online to get an idea of some of the implications, including org-roam. Done..? OK. Now I recommend Karl Voit’s excellent post about the topic, which I find myself agreeing with. Now you’re where I was about a week ago when I did the research.

How to Not Go Back To Basics

One of the products that seems to be everywhere is org-roam. Org-roam is based on Roam, which is in turn a digital implementation of Zettelkasten. It has become very popular in the last couple of months, enough for folks to try to embrace Emacs just because of it. Indeed, some of the popular videos out there show how to set up and work with Doom Emacs (Doom Emacs seems to be the shiny new thing these days as well) configured with this new Zettelkasten method and org-roam.

Call me an old fart (I find that I refer to myself as an old fart more and more these days; I think it’s starting to grow on me), but I refuse to drink the Kool-Aid. I’ve looked into the videos and the images, compared them with what I could learn of the original method (which was analog, based on paper index cards), and realized that essentially the entire system is already potentially implemented in org-mode. The quick setups and videos seem to be directed at newcomers who haven’t used org-mode for long and are looking for a magic bullet for their productivity problems. I know, because I used to look for such magic bullets myself, and Emacs org-mode is the last application in a long list of such products.

So instead of trying to implement org-roam, I decided to take Zettelkasten apart, see what makes it work, and implement it into my existing workflow in org-mode instead.

Emacs org-mode Wiki, ZK Style

After researching Zettelkasten for the last few days, I came up with the following strengths of the system:

  1. The original system was implemented on index cards. It forced Luhmann (essentially Zettelkasten’s prophet) to keep his notes short. Keeping short notes is good:
    1. It reinforces bullet points and focus
    2. It helps to retrieve information: a glimpse instead of pages of text.
  2. Tags were not used (as far as I can tell) or needed: too vague[fn:54].
  3. Instead of tags, related ideas (notes) are connected by links
  4. The links are bi-directional: A ► B and B ► A.

After writing my conclusions down, I had an idea of what to do about my going-back-to-basics series.

Instead of recording new videos or writing new posts from my current point of view today, I can go back to my old wiki from the days when I started my quest. Originally more of a learning experiment with AWS, it contains a few helpful links and tips about learning org-mode Emacs from a point of view of someone who didn’t even know what it was.

With what I learned of Zettelkasten, I hope to re-create it using the already existing text in org-mode. The short entries of the wiki are already constructed in the form of index cards. Now I have the chance to practice organizing knowledge with my existing tools, using the few points I learned from my research.

Footnotes

[fn:55] It also occurred to me that a talking-head kind of thing (where you see the person talking to you in the corner, with the main content on the big screen) doesn’t really feel “me.” I’ve always been more of a writer, and when there’s a need for a visual, animated or not, I have the tools for that. I think I’ll leave web series to those who feel more comfortable making them.

[fn:54] org-roam does use tags, and org-mode definitely has them. While I believe that as a rule of thumb tags should not be used, they can be helpful if they’re specific enough. This can be tricky.

My Impressions of The Pixel 4a

I received my new Pixel 4a the Friday before last, and after a week I can say this with confidence: this is the best smartphone Google ever made. I also hope it will be the last phone I buy from Google.

If you want to read more about the Pixel 4a, there are plenty of reviews on the web, both written and video (here’s a recent in-depth one from Ars Technica, and a quick visual one from The Verge). Instead, I’m going to focus here on why I find this phone so good.

Hardware: The Phone

Let’s start with the obvious: the price. At $350, Google is back where I think it has always been best: supplying top tech at a price that doesn’t twist people’s arms. In hard times like these, with folks losing their jobs, this is a welcome change from the $800-plus flagships.

The Pixel 4a is not a statement of class like the iPhone, and it’s far from delivering the latest and greatest. That’s fine, because you don’t need the latest and greatest, you just need a damn smartphone. The iPhone 8 Plus, which I have for work, still sells for about $500 today[fn:57].

The iPhone 8 Plus has always been too big and too heavy for me, yet somehow its screen is smaller than the one on the Pixel 4a. This makes a big difference for me. The Pixel doesn’t fall out of my hand when I pick it up like the iPhone 8 Plus does. I can reach the corners of the screen without crazy hand acrobatics. It doesn’t stick out of my pocket for everyone on the street to see. It fits nicely into my pouch when I go for a run. These may look like insignificant details, but the overall experience is liberating.

As a bonus, the Pixel comes with a headphone jack. I can carry my FLAC files with me on the phone and listen with my higher-end earphones at the office now, especially since the phone has twice the storage of my old Pixel 2. Bluetooth headphones are good for meetings and streaming music, but if I want to get lost in some good stuff, I need an audio cable[fn:58]. It’s also nice to plug in my good set and charge the phone at the same time when I’m at my computer.

images/2020-08-28.png {{< rawhtml >}} <figcaption>iPhone 8 Plus and Pixel 4a side by side</figcaption> {{< / rawhtml >}}

…And that’s it. Yes, I know the Pixel 4a is all about the camera, the better battery life, the lack of water resistance, Android sucks/rules, whatever. Doesn’t matter to me. One more thing: while I don’t particularly care about looks, the Pixel looks good. The edge-to-edge screen without bezels, the crisp image, the fabric cases to match - it puts the iPhone in perspective.

The software (Android)

Android 10 is part of what makes the Pixel 4a so great. I used to marvel at Google’s utilitarian vanilla Android and scoffed at anything that wasn’t a pure Android experience. Now, after a couple of years in IT working with Android, iOS, macOS, Windows, and of course Linux, I’m moving forward. Or perhaps (depending on your view) backward.

It’s called GrapheneOS: a one-of-a-kind mobile OS, built on top of Android, meant to strip the Pixel of what makes it Google. It’s made for privacy and security geeks who want a smartphone that doesn’t call back to the mothership. I want to wipe this phone clean and see if I can live without Google on me all the time.

But I’m cheating. I still have my iPhone. I’ve decided that all the apps that need to “spy” on me can live on my work phone. Bank app? I don’t mind if my job, which pays my salary, sees my bank statements. Google Maps? If I travel, I can’t leave work completely behind anyway. Phone calls? I barely call people anymore, and most of my contacts are on iPhone, so I don’t mind using FaceTime. Everything truly personal, however, will live on the Pixel: my org notes in Orgzly, my contacts on Signal[fn:56], my media, passwords, etc.

Right now, this is mostly a challenge I want to take. I still have my old Pixel 2, and I’m planning to use it first as a “demo” version to see how this works out - or fails. By the time I’m done I believe GrapheneOS will already have a working Pixel 4a version.

Why? Well, why not? This blog is called the art of not asking why, after all.

Footnotes

[fn:57] This price came up for a refurbished iPhone 8+ on Apple’s website.

[fn:58] Interestingly, I find that I don’t need my amplifier when I connect directly to the jack. With the USB-C dongle, which looks horrible and prevents me from charging the phone at the same time (so it slowly runs out of juice), I need to use an amp as well. So imagine this: a heavy audio cable from the headset, into the adapter, into the amp, into the phone. I want to say there’s a slight difference in sound quality without the amp, but I don’t have a professional-grade headset to detect that sort of thing.

[fn:56] This could have been tricky, since Signal is usually installed through the Play Store, but there’s a way around that: Signal puts its APK directly on its website.

Org-mode and Zettelkasten

After almost two months of mulling it over, I decided to give up on the idea of a newcomer’s wiki for org based on Zettelkasten. I might revisit Zettelkasten in the future, but I don’t want to be bound by the arbitrary time constraints of a project.

(Not) Fixing my Wiki

As of now, my wiki is a single org-mode file containing approximately 50 headers, and those are often broken into sub-headers with instructions and examples. All in all, I probably have some 100 headers to go through. Giving each one of these a hierarchical custom ID (as Zettelkasten dictates) is a huge time suck which isn’t justified for my workflow.

Another problem: the wiki org file is somewhat broken and needs fixing. At one point I thought it was a good idea to sort the headers alphabetically, and I did not use the right org-mode sorting technique. The result was that certain segments and headers got mixed up: I can be reading one topic and suddenly an unrelated paragraph appears out of order. Since this issue is nested deep inside topics I don’t read on a regular basis, it’s hard to locate. I didn’t discover it until weeks (or maybe even months) after it happened, so restoring from backup wasn’t an option.

Yet there are still more organizational problems. The wiki’s images are nested in a sub-directory and lack a good naming convention, which needs to be fixed before I convert the image links from absolute paths to relative ones - something I need in order to view the wiki on my phone.

Finally, there’s my writing style: parts of the wiki are personal and written in first person, journal-like, while others were built from a template meant for public viewing. Writing a wiki for readers means a lot of rewriting.

Some of these issues I am fixing slowly, while others are simply not worth it. I am not going to finish rewriting my wiki any time soon.

Perspective in Bullet Points

After wracking my brains trying to adjust my org-mode system to fit a bit better to Zettelkasten I realized something: I don’t need Zettelkasten, it’s just a collection of ideas that seem cool to implement. As it always turns out for me, org-mode is designed to do exactly what I need it to do with minimal modifications.

Zettelkasten, at least originally, doesn’t place much importance on the order of headers or the hierarchy as long as there is one in place. As long as you have headers and sub-headers, all inter-connected in relationships to one another, you’re fine. Part of the beauty of this system is that you don’t need to worry where you place a new concept in the system, you just do, and you connect it to your existing ideas in your network.

This is a good philosophy, and one which I disagree with.

My hierarchy is crucial, and my headers reflect categories that make sense to me. For example, my wiki is divided into roughly 4 major categories: work tech, personal tech, personal stuff (non-tech), and the Security+ exam. That last part is mostly in the past now, just a collection of notes I may or may not find a use for. Then, each of the sub-headers usually reflects a specific program (if in tech), which helps me narrow an issue down further. From there, it’s usually easy enough to search through my notes with Occur or Swiper, depending on how fuzzy I need the search to be.

Say I need to recall something like capture template syntax in org-mode. I know this falls under the personal tech category (the “crazy” dude at work, with a weird organizational system no one else uses..? Yep, that’s me), so right away I’ve narrowed it down to 25% of the wiki. That’s a lot. Emacs is a huge topic in the wiki under personal tech, which is the first place I’ll look. Then, because org-mode itself contains several headers, I’ll locate org-mode and go there. If I can’t see the capture header at that point, I’ll search under the org-mode header for something like “capture” or “capture template” (one space), which, in Swiper, is a regex search similar to the boolean “capture AND template”. I’ll have what I need in a second.

What about something more vague? Say my boss asks me to find out if a few failing hard drives are under warranty. I know I need to get the hard drives’ serial numbers using a script I used a while back, but I don’t remember anything else. OK, no problem: it’s probably going to be under the work-related category, and then I’ll just search for “hard drive serial”. Turns out there are no results, but I know for sure I wrote about it. I look at the search and remember that I almost never write out “hard drive” - I write hd or hdd. I try again and bingo, I find “get HDD serial”. It’s under work-related, which I knew, and under PowerShell. Aha, of course it was PowerShell! Now I remember. I find the script, and even better, there’s a journal reference (in the wiki, it just says “see more in the journal” with the word journal being a link, nothing fancy). Now I have the technical details and, if I want, I know to look in my journal org file (which is even bigger than my wiki) around that date to see my reflections on the topic.

I also have “see so and so” links which lead to other areas of the wiki, something I implemented before I’d ever heard of Zettelkasten and continue to use where needed - not because I’m supposed to or because the system I use says so. In other words, I kind of used Zettelkasten before I even knew what it was. Creating such links, by the way, doesn’t require anything fancy like org-roam or even custom links: a simple C-c l grabs the link of the header you’re on, and C-c C-l will suggest pasting it for you automatically. This is org-mode usage 101, straight out of the box, no package needed. In case these links break, as they do in a dynamic wiki, the broken link usually offers enough hints for me to re-link it. This doesn’t happen often enough to worry beyond a manual fix when needed.
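
To make that concrete, here’s a minimal sketch of what such a link looks like in an org file (the header names here are made up for illustration). C-c l on the target header stores the link; C-c C-l elsewhere offers to paste it:

```org
* PowerShell
** Get HDD serial
   Script and notes here...

* Warranty checks
  For pulling the serial numbers, see [[*Get HDD serial][Get HDD serial]].
```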

There are Many Other Great Sources

So far, I’ve discussed how I don’t want to modify my wiki and how I don’t really need Zettelkasten to find stuff. The other thing is, there’s little need for yet another “noob guide” for people new to Emacs. Chances are they won’t find this blog anyway, not when they’re starting out. Google’s SEO is a science I don’t really want a part of. If someone lands on this blog, it’s probably because of my rambling on Mastodon or Reddit trying to help folks out, and that’s fine. Certain posts are more popular than others, and I like to see what folks find useful.

There are wonderful webcasts, blogs, and even books directed at new users. Like anything else in life, there’s no magic bullet. Org-mode is a huge system inside an even bigger one, Emacs. Many times I think the problems folks are trying to fix with packages and add-ons are the result not of a lack of options, but of a lack of knowledge of the options they already have.

I’m not trying to be harsh, but as a person who chased different organizational apps for about 10 years himself, switching from one to the other, I know this to be true for me. I know better than to convince folks to try something else: anyone who works in IT will tell you one of the hardest parts is fighting users’ old habits, as ineffective and insecure as they may be. You want to try Zettelkasten and org-roam? Then by all means, go ahead. It’s a good system. Just remember: at the end of the day, you are the person who needs to organize your information, no matter what tool you use, be it paper index cards, which don’t come with amazing regex searches, or a complete database with its own search capabilities. You’re in charge. I’m just here rambling.

My XFCE Experiment Works Out

KDE’s Plasma desktop environment has been my go-to on whatever Linux distro I use: Xubuntu, openSUSE, Manjaro, and now Debian. As I keep favoring the terminal and Emacs over GUI tools, the Plasma desktop is becoming more of a habit than a necessity. I decided to dump KDE altogether and give XFCE another try with a new laptop build. Here’s what I have to say after a couple of weeks.

Open any “Linux porn” (hey, I didn’t come up with the concept) subreddit or hashtag on Twitter, and you’ll realize two things: all the cool kids use KDE, and all the cool kids try to make their Linux laptop look as close to a Mac as possible. I’m more of a grumpy IT guy than a cool kid, so I never bought into this camp to begin with.

After I got Debian working on an XPS 13 I had lying around[fn:59], it only took a day with KDE before I went “oh yeah, I wanted to try XFCE…” and dumped the whole thing. These were my main grudges:

  1. KDE Wallet, a tool that’s supposed to help with credentials, was the only way to save my WiFi password. When it was disabled, I had to re-type the password every time I woke the laptop (from hibernation as well). There are various tips online, like deactivating the wallet (fails) or giving all users permission to the WiFi connection in the adapter’s GUI settings (also fails). I just hate that no one asked me if and how I want to save my passwords; this feels too much like a Microsoft trick.
  2. I don’t need KDE Wallet; I use KeePassXC. But you know what else I don’t need? KMail, Akregator, Konversation, KTorrent, Konqueror, Kontact - the list goes on. Some of these apps are easy to remove, some… not so much. But why do I need to uninstall anything? I understand these are good apps, I understand they are helpful and give you that “streamlined, fluid experience” or whatever sexy words you want to use. I don’t care. No thank you.
  3. This one is definitely a pet peeve: time format. KDE’s time handling is coded with Qt[fn:60], which doesn’t follow the standard time flags you’d use elsewhere. I’ve lived in the US for most of my life at this point, but I’ll never get used to the ridiculous time-recording system here. I record dates in yyyy-mm-dd format, and I use a 24-hour clock. It makes sense, it always has, and it always will. The fact that I can’t change that simple thing and have a different time format while keeping my locale inside the US drives me crazy. The only way to (kind of) fix this is to dive deep into KDE’s individual settings for the lock screen, then the splash screen, and then the different widgets, and change them all. Because this is the locale, changing it can mess things up for apt and other installations, so I’ve learned to live with a thorn in my side.
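
For comparison, outside of KDE’s settings this takes one line each: date(1) accepts the standard strftime flags directly, including the ISO week number I keep on my panel:

```shell
# ISO date with a 24-hour clock - the format I use everywhere:
date +"%Y-%m-%d %H:%M"
# ISO week number, handy for tracking projects week by week:
date +"Week %V"
```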

With XFCE there’s barely any software I don’t need. I still need to uninstall the LibreOffice suite, but that’s about it. The WiFi works without me entering the passphrase every single time, and the time shows in a sane manner by default. I already have a 24-hour clock with a yyyy-mm-dd date and a week number (my preferred time unit for measuring projects and progress) on my top panel.

The lower panel is where you’ll find the only feature I happily “stole” from macOS: Plank, which imitates macOS’s genie effect when hovering over app icons, with active apps getting a bright dot under them. I find that Plank makes more sense on a small screen, where a normal panel would make the icons too small to see clearly.

XFCE comes with options that still let me customize its appearance the way I want. Right now I have a dark central theme, with a window-manager theme that does away with XFCE’s outdated default window decorations, a half-transparent panel at the top that gives only minimal information, and even a little owl icon for the launcher, just for kicks.

images/2020-09-23.png {{< rawhtml >}} <figcaption>My XFCE environment</figcaption> {{< / rawhtml >}}

Not everything is perfect, of course, and I still need to implement a few tweaks. Some of the system icons, for example, cannot be replaced by the themes I downloaded and look even more out of place now next to the modern-looking ones. KDE had a pretty simple way to make the Caps Lock key work as Control, which is one of the first things I do on any machine I use Emacs on, but it’s not as easy on XFCE. The language indicator in the panel doesn’t seem to work right, and I can’t use it to switch languages (the keyboard shortcut works, though). These are things I will need to look into, to fix or get used to, but overall I think XFCE looks fine on the laptop.
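
For the Caps-Lock-as-Control piece, the usual route outside KDE is setxkbmap. This is only a sketch for the current X session; making it permanent would mean adding the command to XFCE’s autostarted applications (Settings → Session and Startup), which I haven’t settled on yet:

```shell
# Remap Caps Lock to act as Control for the current X session.
# Guarded so the snippet is a no-op on a machine without X running.
if command -v setxkbmap >/dev/null 2>&1 && [ -n "${DISPLAY:-}" ]; then
    setxkbmap -option ctrl:nocaps
fi
```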

Footnotes

[fn:59] This wasn’t easy. Debian doesn’t “like” proprietary firmware, like the kind needed for the XPS’s NIC, which means you don’t get WiFi out of the box; you have to work for it. There was another issue with GRUB getting lost somewhere in the boot partitions, which required re-installation from a live USB - two things that would probably discourage users from Debian and send them running back to Ubuntu. Most users who choose Debian, though, are slow and patient folks who enjoy learning how things work and spend time on IRC channels, which ended up working in my favor.

[fn:60] I am not an expert on this, but I looked into changing the lock screen and login time formats and it requires tweaking configuration files deep inside KDE’s folders. This never worked fully: I was able to get the lock screen time or the logon screen time corrected, but never both.

Privacy

When I started researching privacy more seriously, I didn’t know that what I was looking at was the face of a forlorn, desolate path that seems to go nowhere. I picked it up as a challenge, and within a few days it became clear that privacy is, unfortunately, a journey of hard sacrifices.

I asked for help on Reddit’s privacy community and was blocked from posting, because the account I had freshly created for this very purpose was too suspicious. It was just one stop of many I’ve made in the last two months. Article after article, from Medium to Lifehacker, was filled with beginner tips like “use a VPN” or “search with DuckDuckGo.” The deeper I dug, looking for more specific and advanced techniques, the more it seemed that the internet I use every day suddenly had an end. A wall.

images/2020-10-26.gif

Everything on the screen had a dictated sense of purpose. I was supposed to follow the “good” answers laid in front of me: buy a visa gift card with your credit card; submit your phone number to get a discount; surrender your verified email address to chat with an agent. Not a single website gave additional options. Everywhere I looked, I had to give up a piece of my privacy or to lie and hope for the best. I chose the latter option whenever I could. The feeling that I’m doing something wrong intensified.

Then there’s the loneliness. I haven’t logged into Facebook for years, leaving most of my friends and past co-workers behind. I stopped using Twitter last year, and with it, I stopped following trends and celebrities in my industry. Most recently, I stopped using Instagram and lost touch with those who liked my photos - a window to the outside world, especially during the pandemic. On the other hand, getting in touch with me became more difficult, since I insist on less popular apps that no one wants to download. When I try to explain why, it often feels like I’m speaking a different language, even with those closest to me.

This seems to be the price of privacy. I am almost at a breaking point, and I’ve only started. I understand now why there’s so little real information about true privacy: those who stay off the familiar platforms live in the shadows by default. It’s not that they try to hide; they just do what they can to not sell their lives away, which means they can’t reach us, nor we them. You can’t Google them, friend them on Facebook, follow them on Twitter… so they don’t exist to us, automatically suspected criminals of the dark web.

So much of what we do, what we are, is not even ours to share but instead borrowed, hosted somewhere unknown to us, supplied only if we sign an agreement to give it up. And we don’t care to know. After all, when was the last time you read Facebook’s TOS? Twitter’s? Gmail’s? Apple’s? Instagram’s?

I wish I could tell you this is all just a cheap Matrix ripoff. I wish I could go see a therapist who’d assure me I’m exaggerating. The problem is that I know better. I can’t unlearn what I’ve learned, unread what I’ve read. Instead, I’m figuring out how to bend the rules just enough to be more than a sheepish user.

Not Writing About Privacy


Whenever I start writing about privacy I feel negative. It’s a complex feeling. Part of it is the nagging sense that I’m doing something “wrong” or even “illegal.” Years of social conditioning, the idea that we should have nothing to hide and shouldn’t sneak around the safeguards companies put up (partly to protect us from “hackers”), have burrowed deep into my brain. There’s a good world, an ignorant world, a world padded by walls built by companies whose services we depend on day in and day out, and I don’t want to undo that feeling on this blog.

There’s also the depressing, lonely part of it, a deep pit I don’t like to dwell in. There aren’t many outside sources that really lay out how to do things from a to z. Everything I’ve discovered and continue to learn is a collection of tips and guides baked into my own recipe, and no one’s coming for dinner. These sorts of strategies cannot be discussed publicly, but this forum is public, and I want to keep it this way. Discussing things with others is important to me.


Then there’s the fact that TAONAW is not a good example of anonymity. While I created this blog under an alias, and it is hosted on a separate account on Github, it wasn’t meant to shield my privacy. It was created as a writing environment with an emphasis on productivity.

If I ever create a true public document about privacy, it stands to reason that it would be as private as I could make it, a result of the studies I’ve done so far. TAONAW is not that place, and it shouldn’t be, for the reasons I expressed above.


Learning didn’t take me down a clear path. It has been a challenge of picking up hints and following breadcrumbs. It is a fluid skill I have to keep learning as I go, one that keeps me on my toes. It is difficult, challenging, depressing, pressuring. You can’t just learn it; you have to earn it, as a result of conditioning your own mind. This is not something I can express in writing. Only those who are really obsessed with the idea find the holes they’re looking for, and to hand them out on a platter goes against one of its primary principles. You have to be a bit crazy to find some of these things, and that sort of crazy needs to be experienced first hand.


Privacy: Getting Expendable Accounts

It took me some time to write my previous post about privacy. It took twice as long to publish this one. The methods below are not illegal, yet as you read through them, you might get the feeling they might be. One of the questions that may arise is “but why would I go to such extremes?”

You don’t have a choice. Not if you truly want a blank slate. What I’ve written here is an introduction to an introduction. As I discover techniques, I write down what I’m learning and implement the strategies as I go.

Alternate email (easy)

This is probably the easiest one to attain. A throw-away email (aka burner email) for a limited time is excellent when you want to register for an online service. Considering the amount of junk we all get in our email accounts, it’s probably a good idea to always use one[fn:61].

One popular service is maildrop, but its popularity is also its weakness. Most services block it and will not allow an email from the .cc domain. Maildrop does not create unique email addresses either, so anyone fishing for popular email addresses on this service might end up with your emails. Don’t be the person who creates a “[email protected]” account, because chances are it will be found quickly.

Another service, and a long-time favorite of mine, is fake mail generator, which creates legitimate-looking email addresses with different domains. This website yields mixed results; some online services work fine with it while others block the addresses it generates. It is quick to use, however, and can produce better results than maildrop.

Another popular website I’ve had a high success rate with is temp mail. This one seems to generate a unique email address each time, including the domain, and it filters bots with a captcha (if you’re behind a VPN, expect it to test you with an annoying “find all the airplanes” kind of thing), so it seems more resilient to being blocked by online services. The problem with this one is that you can’t go back to an email you’ve used. Each address exists only as long as the browser is open. If you’re locked out of an account you’ve used this service with, you’re out of luck.

Alternate Google/Facebook/Twitter Account (medium difficulty)

While an alternate email address is easy, creating an account with one of the big ones out there (Google, Facebook, Twitter, etc.) is not nearly as simple. These services are designed to detect bots and spammers, and they place privacy seekers in the same bracket. Simply put, if you’re trying to hide your identity by using a VPN, changing your MAC address, and logging in from different locations, expect a high rate of failure. You’re going to have a hard time.

I’m going to focus on Google here since this has been a primary target in my research. Creating a new account from a computer is not that easy, and it seems that Google filters users based on operating systems as well as IP addresses. For example, I could not create a new Google account on my Debian VM without phone verification, but creating one from ChromeOS was easy.

Google will end up insisting on a phone number sooner or later. It always asked me for a phone number for verification, and unless I wanted to give it my real one or a Google Voice number (which also points to me through the legitimate Google account it’s registered with), it failed to work, even with a valid VoIP number that works for calls and texting otherwise. As it turns out, it’s quite easy to filter out VoIP numbers from carrier phone numbers. To my surprise, even “landlines” (since the vast majority of these are also VoIP these days) are disqualified. Google, as well as Apple, Twitter, Facebook, and the other big ones are interested specifically in your cellphone number. With one exception.

When you use a mobile device[fn:62] to create a Google account, Google lets you go ahead without a valid phone number. This is by design: after all, if you never had a cellphone before, how are you supposed to get one? The solution, then, is to use a mobile device on a legitimate, non-VPN-protected WiFi. Your local Best Buy, Target, WalMart, etc. are perfect for this. Just approach the new Android phone section with confidence and create an account using the phone’s browser (do not attempt to register a Google account with the OS or with the actual apps; this will not work, as the device does not have a SIM and is stuck in “demo” mode). Be prepared for the helpful salespeople to take an interest in what you’re doing, so be swift and confident. After all, they’re just doing their job. Make sure you switch between phones, alternate between different apps like the camera or YouTube, and basically don’t look suspicious.

A (good) alternate phone number (hard)

Getting a good alternate phone number is hard. There are services out there that will give you a free-to-use VoIP number for voice calls and texts outside of Google Voice (a favorite is TextNow, which allows you to create an account with a Facebook, Apple, or Google account), but it seems Google Voice numbers are better than others. I came to this conclusion after some services that did not allow me to register with my TextNow number were fine accepting a freshly created Google Voice number.

But getting a Google Voice number is harder than getting almost any other account, including Google’s. I was even able to open a Facebook account without phone verification, but this seems impossible to do with Google Voice[fn:63].

For Google Voice to grant you a number, it must be verified with another number, to which it sends a verification code. When I tried that with my TextNow number, I got the verification code, both in a text and in a phone call, but the code failed to register back with Google Voice; it seems Google Voice simply ignored it. I tried to register with Google Voice off VPN and from other large public WiFi networks, but without success. The nature of this failed authentication, where Google Voice simply “ignores” my input, leads me to believe I need to keep trying from different WiFi networks.

For now, my Google Voice number is tied to my real phone, and that means apps I use it with can be traced back to me.

Methodology

I ordered the headers above from easiest to hardest based on my experience. This is also the order in which I believe these alternative accounts should be created to remain anonymous:

  1. Create a Google account from an Android phone that you do not own (or perhaps a virtual one).
  2. Open the Google account from a WiFi you do not own (such as a public WiFi in a public library or a coffee shop). Do not use a VPN. Change your laptop’s MAC address if possible (this is known as spoofing, and it’s pretty easy to do). Send some emails to people you don’t know (this sounds crazy, but it helps “bake” the account and make it more authentic. Remember that Gmail scans your emails for ads; this will help ensure it all looks valid).
  3. Now, from behind a VPN (and possibly the TOR browser, depending on your level of paranoia), use the Gmail address associated with this account to register with services such as Twitter, Facebook, and TextNow. Keep in mind Google will probably throw a bunch of captchas at you, and you might lose your account at this point.
  4. Authenticate the Twitter and Facebook accounts with the number from TextNow, if possible. If they don’t accept it at one point, they might accept it at another; this has been my experience with Twitter a couple of times.
  5. Try to “upgrade” the TextNow account with Google Voice if possible, using the same Google account.
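
Step 2 mentions MAC address spoofing. Here is a minimal sketch of what that can look like on Linux, assuming the `ip` tool is available; the interface name wlan0 is a placeholder, check yours with `ip link`:

```shell
# Build a random, locally administered MAC address. A first octet of
# "02" sets the locally-administered bit, so the address won't
# collide with a real vendor prefix.
new_mac="02$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n' | sed 's/../:&/g')"
echo "$new_mac"

# Assigning it to an interface requires root (wlan0 is an assumption):
# sudo ip link set dev wlan0 down
# sudo ip link set dev wlan0 address "$new_mac"
# sudo ip link set dev wlan0 up
```

Tools like macchanger automate this, but the three `ip link` commands above are all that is really happening under the hood.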

Footnotes

[fn:61] Keep in mind that burner emails are highly unprotected and should not be used long term. If you don’t use a password manager like Keepass or LastPass, this would be a good time to consider one: not only will it generate a strong password, it will also save the random email address you just received for future use. Another advantage of a password manager is that you can use it to create a random, unique user name. As humans, we like patterns, and you’d be surprised how those random names you come up with are actually not random at all.
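
The point about random user names can be sketched in a few lines of Python. This is my own illustration, not how any particular password manager works; the function name and lengths are arbitrary:

```python
import secrets
import string

def random_credentials(name_len=12, pass_len=20):
    """Return a (username, password) pair free of human patterns.

    secrets (rather than random) draws from the OS's cryptographic
    RNG, which is what you want for anything security related.
    """
    username = "".join(secrets.choice(string.ascii_lowercase) for _ in range(name_len))
    alphabet = string.ascii_letters + string.digits + string.punctuation
    password = "".join(secrets.choice(alphabet) for _ in range(pass_len))
    return username, password

user, pw = random_credentials()
print(user, pw)
```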

[fn:62] In my attempts to run an Android VM I still encountered difficulties, probably because I was still behind a VPN. For privacy’s sake, I didn’t want to use a public WiFi near my home. Technically speaking, though, I think you could get a number if you were to use your naked WiFi and an Android VM, but that defeats the purpose…

[fn:63] Interestingly and fortunately, one of the best apps is fine receiving TextNow authentication: Signal. I was able to register a new account with Signal using the TextNow number, which means my Signal identity is not tied to any of my real accounts. If you use Signal this way from an Android VM, you can effectively have truly anonymous and encrypted communications (you can download and install the APK from the Signal website, no need to go through the Play Store).

About Josh Rollins

Josh Rollins is an NYC-based self-made techie who works with humans and machines and enjoys interacting with both. He explores automation, privacy, and affordability in tech. As a result, he often finds himself tangled in the blurred lines of the real and the ideal.

Rollins has broken his technology (this website included) many times over. While he feels disqualified from giving any good advice, he enjoys doing so just the same. With whatever spare time he has left, Rollins tries to upgrade himself and lead a semi-healthy, somewhat sane lifestyle.

As an individual who shuns Facebook and Twitter, Rollins can be a bit tricky to find, but he can be reached at jarss at protonmail dot com, Reddit, or Mastodon.

TAONAW: The Art of Not Asking Why

This website is virtually free to maintain[fn:29].

TAONAW is created with Hugo, a free, open-source site generator written in Go. It is designed and authored on a free Linux OS using Emacs org-mode and ox-hugo. Bandwidth is currently supplied free of charge by Github.

I write posts in my free time because I want to and I enjoy doing so. No one is supporting me, sponsoring me, or otherwise paying me anything to write or create or publish any content. The goal is to keep things this way.

If you enjoy what you find here, you can go ahead and leave a one-time tip. This is not payment: you’re tipping me for a service I gave (a post you liked), not a service you want to get (asking me to write a post).

A note about copying/using the code and other parts of this site: I use free tools and free knowledge to build this site, and you are welcome to take and use anything you’d like, except the logo. The logo of the owl and the book was not free, and therefore you may not copy or re-use it. It was created by another artist, whom I hired. If you want the name, I’ll be more than happy to give it to you. Just ask.

You can also go ahead and copy the content of the posts as well; just please link back to this site. This is just common courtesy.

Footnotes

[fn:29] The domain, joshrollinswrites.com, costs me about $15 and change per year.