Daily Ramblings

For future reference

Caveats from installing openssl@1.1 via Homebrew:

==> Caveats
A CA file has been bootstrapped using certificates from the system
keychain. To add additional certificates, place .pem files in
  /opt/homebrew/etc/openssl@1.1/certs

and run
  /opt/homebrew/opt/openssl@1.1/bin/c_rehash

openssl@1.1 is keg-only, which means it was not symlinked into /opt/homebrew,
because macOS provides LibreSSL.

If you need to have openssl@1.1 first in your PATH run:
  echo 'export PATH="/opt/homebrew/opt/openssl@1.1/bin:$PATH"' >> ~/.zshrc

For compilers to find openssl@1.1 you may need to set:
  export LDFLAGS="-L/opt/homebrew/opt/openssl@1.1/lib"
  export CPPFLAGS="-I/opt/homebrew/opt/openssl@1.1/include"
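
A quick way to confirm the Homebrew build is the one being picked up (sketch, assuming the PATH line above has been added to ~/.zshrc and a new shell started):

  which openssl     # should print /opt/homebrew/opt/openssl@1.1/bin/openssl
  openssl version   # should report OpenSSL 1.1.x, not LibreSSL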

A brew operation tried to update git, and it broke things with:

xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun

Seems like there is a way around it by running:

xcode-select --install

which should get the Command Line Tools packages back

Ref: https://stackoverflow.com/questions/52522565/git-is-not-working-after-macos-update-xcrun-error-invalid-active-developer-pa

The above did not solve the issue.

Finally had to remove git from Homebrew:

brew uninstall git

You can confirm that it worked by typing

which git

which should not return a path containing homebrew (it should now point to the system git, e.g. /usr/bin/git)
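
To double-check which toolchain is active after the cleanup (sketch):

  which git          # should be /usr/bin/git once the Homebrew copy is gone
  git --version
  xcode-select -p    # prints the active developer directory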

A sudoers entry that grants passwordless sudo to members of the google-sudoers group:

%google-sudoers ALL=(ALL:ALL) NOPASSWD:ALL
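
If this ever needs to be added to a machine by hand, a drop-in file edited via visudo is the safe route, since visudo syntax-checks before saving (sketch; the file name is arbitrary):

  sudo visudo -f /etc/sudoers.d/google-sudoers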

This is something that I need to check on quite often to test if the proxy works as expected:

curl -x <proxy-ip>:<proxy-port> http://ident.me -v --proxy-user '<proxy-username>:<proxy-password>'

Note that the content in <> tags needs to be replaced with the appropriate values

The above command will hit http://ident.me via the proxy, and the IP returned should be the proxy's IP

If the proxy auth fails, the response will be:

< HTTP/1.1 407 Proxy Authentication Required
< Server: squid/3.5.23
< Mime-Version: 1.0
< Date: Thu, 14 Jan 2021 13:40:28 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 3577
< X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
< Vary: Accept-Language
< Content-Language: en
< Proxy-Authenticate: Basic realm="proxy"
< Connection: keep-alive
<
* Ignore 3577 bytes of response-body
* Received HTTP code 407 from proxy after CONNECT
* CONNECT phase completed!
* Closing connection 0
curl: (56) Received HTTP code 407 from proxy after CONNECT
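
When auth succeeds, ident.me just returns the caller's public IP, so the success case can be scripted as a comparison (sketch, with the same placeholders as above):

  via_proxy=$(curl -s -x '<proxy-ip>:<proxy-port>' --proxy-user '<proxy-username>:<proxy-password>' http://ident.me)
  echo "IP via proxy: $via_proxy"   # should match the proxy's egress IP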

I have always wanted to make some contributions on GitHub, and for some reason or another I was never able to pull it off before. But that ends today: I made more commits today than in the last few years of having that account (mostly an exaggeration).

The new way of working involves creating tickets or issues and then solving them by adding more and more code and comments. Putting your thought process in the issues helps, as you become clearer about what you want to be doing.

I hope to keep doing this daily, hopefully allotting more and more time to it every single day.

Probably need to ask for help getting unit tests added to the code that I write, because it lacks them right now. Hopefully I'll get help from people who have been in such a predicament before and know better.

What are the things that I cannot live without in a new laptop?

It is a good opportunity to explore this question when you are using a new laptop and a new OS altogether.

Will try to list the things I needed to install on my new laptop and how I went about it on macOS.

Applications

This can be split into two categories:

Day to day use

  1. Discord (installed by dragging the application)
  2. Telegram (App Store)
  3. Wireguard (App Store)
  4. Visual Studio Code (Running the binary)

Tech Plumbing

  1. python3 – required installing Xcode, as python3 already comes with it. The version that I got was 3.8.2
  2. lua – needed to install brew to get this working. Finally got version 5.4 of Lua installed (all my Lua automations should work flawlessly with this)
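
For reference, the brew route for lua was roughly (sketch, once Homebrew itself is set up):

  brew install lua
  lua -v    # should report Lua 5.4.x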

Other stuff

  1. Installed Notability, as I believed it would allow me to sync notes from my iPad easily. Do not have high hopes for it though

So I finally took the leap of faith and decided to get a new laptop. And while I was at it, I decided to go all in.

The laptop I decided to go for was the newly launched MacBook Air M1, with the new ARM-based desktop processor that seems to be pretty powerful and, at the same time, pretty power efficient.

They do cost a lot, but I believe that for the portability benefits it provides, it is worth the money.

I ordered it on December 8 and expected it to be delivered in a day or two – like a similar order I had placed on the Apple website. I was wrong. As I had customised the model to get the 16 GB variant, it took way longer than expected. I got no update after placing the order, and it was shipped only a few days prior to today. On top of that, even after it shipped, Blue Dart was adamant that the delivery date would be 2nd Jan.

I sulked and went to my native place, risking nobody being at my home in the city. And guess what, I got a text message on that specific day saying the order was being delivered that very day. Somehow I got it delivered to my native place instead.

Now all that remained was to set things up and learn to navigate the not-so-intuitive UI that macOS provides.

Initial impressions:

  1. Awesome build
  2. Awesome battery performance
  3. Keyboard is pretty good
  4. macOS seems to lack a good user feedback system
     * There were scenarios where no feedback was provided when confirm was clicked
     * At times during the initial setup, it felt like the OS had hung, as it gave no visual indication that it was processing something

Rosetta works awesomely, and I have not seen any slowdowns in performance.

Will update once I find more.

This took some effort, as the OS was in an inconsistent state when I installed WireGuard using the buster-backports repo that Debian provides.

The problem I was facing: the WireGuard kernel module was not being compiled for the running kernel. A newer version of the kernel had been installed, and wireguard-dkms was building its module for that newer kernel, but the Raspberry Pi had not been rebooted, so it was still running the older kernel, for which the dynamic kernel module did not exist.

I had to do a full system upgrade using:

apt update
# followed by
apt upgrade

and a reboot using shutdown -r now.

Even after the reboot, there seemed to be some issue with loading the module. So, as a last step before pulling out the bigger guns, I made the WireGuard kernel module recompile using

dpkg-reconfigure wireguard-dkms

That made it run like butter.
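
A quick way to verify that the module now matches the running kernel (sketch):

  uname -r                  # the kernel actually running
  sudo modprobe wireguard   # load the module explicitly
  lsmod | grep wireguard    # should list the module once loaded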

Today was the first time I saw how hard apt, the package manager, works to verify its downloads. I ran into this while doing an upgrade.

The following was the error I encountered:

E: Failed to fetch http://mirror.ossplanet.net/raspbian/raspbian/pool/main/g/gnutls28/libgnutls30_3.6.7-4+deb10u5_armhf.deb  Hash Sum mismatch
   Hashes of expected file:
    - SHA256:dfd3e885cc7090d951d3e00fa9ef6f2f2b462dd152bfc43c456c077cc3ea9351
    - SHA1:303ba905d338ba521d56f3fac7d895bbc849b267 [weak]
    - MD5Sum:b7d05c3214dd4803f1a2edc9daeb2634 [weak]
    - Filesize:1047252 [weak]
   Hashes of received file:
    - SHA256:b56d15a14d8f63a723238809564d6975ee4e60a0bc946f73f4dfaa69b2d5f6b2
    - SHA1:c64dda427cdd23e33496ff31e047d1a4f98f0d59 [weak]
    - MD5Sum:f213b6f673794ed6a00585db117ed9b8 [weak]
    - Filesize:1047252 [weak]
   Last modification reported: Sat, 01 Aug 2020 22:08:12 +0000
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

What this means is that the downloaded file did not match what apt expected; apt checks three different hashing algorithms, probably so that a collision in one will most likely not be a collision in the others.

Running an apt update once again fixed this pretty easily, and I was able to go ahead with the upgrade without any issues.
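
The same check can also be done by hand on a downloaded package, if the file is still present in apt's cache (sketch):

  sha256sum /var/cache/apt/archives/libgnutls30_3.6.7-4+deb10u5_armhf.deb
  # compare the output against the expected SHA256 from the error above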

Pretty humorously framed: https://apenwarr.ca/log/20170810

I would like to build an internet archiver to archive these posts.

What it will essentially do is:

  1. Get the HTML content of the link that I give it
  2. Save it in a key-value store, with all links in the page updated
  3. Recursively save all linked pages to the key-value store

It should allow for:

  1. Content to be loaded just as it looks (no dynamic content)
  2. Links should work and point to a valid page

Thinking a bit more about its design:

  1. Rather than referring to the actual link in the query parameters, it will be better to use a hash, and keep a reverse map of hash to URL if required in future (or perhaps this can be stored in a comment in the HTML body that gets saved)
  2. So the crawler will work like this (a rough sketch follows this list):
     * Get the HTML content
     * Insert the crawled URL as a comment in the head of the HTML
     * Save this content to a key-value DB, with the URL hash as the key and the modified HTML as the body
     * Add a link to the above in some other place to start the navigation – something like sections – or index it using Elasticsearch so that a search engine can be built on the articles
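
A minimal sketch of the hash-keyed storage idea, using a plain directory as the key-value store (the URL and paths here are hypothetical):

  url='https://example.com/some-post'
  # sha256sum on Linux; use `shasum -a 256` on macOS
  key=$(printf '%s' "$url" | sha256sum | cut -d' ' -f1)
  mkdir -p archive
  # prepend the source URL as an HTML comment (a real version would insert it inside <head>)
  { printf '<!-- archived-from: %s -->\n' "$url"; curl -sL "$url"; } > "archive/$key.html"
  # reverse map of hash -> URL for future reference
  printf '%s %s\n' "$key" "$url" >> archive/index.txt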

This does sound like the Wayback Machine, but its objective is just to preserve content that I want preserved – tracking changes is not required at all, and this will mostly be a personal collection of things.

Some day I will find the time to implement this.