==> Caveats
A CA file has been bootstrapped using certificates from the system
keychain. To add additional certificates, place .pem files in
/opt/homebrew/etc/openssl@1.1/certs
and run
/opt/homebrew/opt/openssl@1.1/bin/c_rehash
openssl@1.1 is keg-only, which means it was not symlinked into /opt/homebrew,
because macOS provides LibreSSL.
If you need to have openssl@1.1 first in your PATH run:
echo 'export PATH="/opt/homebrew/opt/openssl@1.1/bin:$PATH"' >> ~/.zshrc
For compilers to find openssl@1.1 you may need to set:
export LDFLAGS="-L/opt/homebrew/opt/openssl@1.1/lib"
export CPPFLAGS="-I/opt/homebrew/opt/openssl@1.1/include"
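A quick sanity check after updating PATH, just to confirm that the keg-only install is the one that gets picked up (nothing here beyond the paths from the caveat above):

```
# re-read the shell config (or open a new terminal)
source ~/.zshrc
# should now resolve to /opt/homebrew/opt/openssl@1.1/bin/openssl
which openssl
# should report OpenSSL 1.1.x rather than the system LibreSSL
openssl version
```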
I have always wanted to contribute on GitHub, but for one reason or another I never managed to pull it off. That ends today: I made more commits today than in the last few years of having that account (mostly an exaggeration).
The new way of working involves creating tickets or issues and then solving them by incrementally adding code and comments. Putting your thought process into the issues helps, as it makes you clearer about what you actually want to do.
I hope to keep doing this every day, allotting a little more time to it as I go.
I probably need to ask for help getting unit tests added to the code I write, because that is something I lack right now. Hopefully people who have been in this predicament before, and know better, will help.
So I finally took the leap of faith and decided to get a new laptop. And while I was at it, I decided to go all in.
The laptop I decided to go for was the newly launched MacBook Air M1, with the new ARM-based desktop-class processor that seems to be pretty powerful and, at the same time, pretty power efficient.
It does cost a lot, but I believe the portability benefits it provides make it worth the money.
I ordered it on December 8 and expected it to be delivered in a day or two, like a similar order I had placed on the Apple website. I was wrong. Since I had customised the model to the 16 GB variant, it took much longer than expected. I got no update after placing the order, and it shipped only a few days before today. Even after it shipped, Blue Dart was adamant that the delivery date would be 2nd Jan.
I sulked, came to my native place, and risked nobody being at my home in the city. And guess what: on that very day I got a text message saying the order was out for delivery. Somehow I managed to get it delivered to my native place instead.
Now all that remained was to set things up and learn to navigate the not-so-intuitive UI that macOS provides.
Initial impressions:
1) Awesome build
2) Awesome battery performance
3) Keyboard is pretty good
4) macOS does not seem to have a good user feedback system
* There were scenarios where no feedback was provided when Confirm was clicked
* At times during the initial setup it felt like the OS had hung, since it gave no visual indication that it was processing something.
Rosetta works remarkably well, and I have not noticed any slowdowns because of it.
This took some effort, as the OS was in an inconsistent state when I installed WireGuard from the buster-backports repo that Debian provides.
The problem I was facing was that the kernel module for WireGuard had not been compiled for the running kernel: a newer version of the kernel had been installed, and wireguard-dkms built the module for that newer kernel, but the Raspberry Pi had not been rebooted and was still running the older kernel, for which the dynamic kernel module did not exist.
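A quick way to confirm this kind of mismatch is to compare the running kernel with the kernel versions that actually have modules on disk (a rough sketch; assumes the DKMS tooling is installed):

```
# the kernel the Pi is currently running
uname -r
# the kernel versions that have module trees installed
ls /lib/modules/
# what DKMS thinks it has built, and for which kernels
dkms status
```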
I had to do a full system upgrade using:
apt update
# followed by
apt upgrade
and a reboot using shutdown -r now.
Even after the reboot there seemed to be some issue with loading the module. So, as a last step before pulling out the bigger guns, I made WireGuard recompile the kernel module.
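The exact invocation depends on how the module was packaged, but with wireguard-dkms the rebuild usually boils down to something like the following (a hedged sketch, not necessarily the exact command I ran):

```
# ask DKMS to build and install modules for the currently running kernel
sudo dkms autoinstall
# or re-run the package's configure step, which also triggers a rebuild
sudo dpkg-reconfigure wireguard-dkms
# then load the freshly built module
sudo modprobe wireguard
```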
Today, while doing an upgrade operation, was the first time I saw how hard the package manager apt works.
The following was the error I encountered:
E: Failed to fetch http://mirror.ossplanet.net/raspbian/raspbian/pool/main/g/gnutls28/libgnutls30_3.6.7-4+deb10u5_armhf.deb Hash Sum mismatch
Hashes of expected file:
- SHA256:dfd3e885cc7090d951d3e00fa9ef6f2f2b462dd152bfc43c456c077cc3ea9351
- SHA1:303ba905d338ba521d56f3fac7d895bbc849b267 [weak]
- MD5Sum:b7d05c3214dd4803f1a2edc9daeb2634 [weak]
- Filesize:1047252 [weak]
Hashes of received file:
- SHA256:b56d15a14d8f63a723238809564d6975ee4e60a0bc946f73f4dfaa69b2d5f6b2
- SHA1:c64dda427cdd23e33496ff31e047d1a4f98f0d59 [weak]
- MD5Sum:f213b6f673794ed6a00585db117ed9b8 [weak]
- Filesize:1047252 [weak]
Last modification reported: Sat, 01 Aug 2020 22:08:12 +0000
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
What this means is that the downloaded file did not match what apt expected. apt checks three different hashing algorithms, presumably so that a collision in one of them is very unlikely to also be a collision in the others.
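For the curious, the same check can be reproduced by hand (a small sketch; libgnutls30 is just the package from the error above):

```
# download only the .deb into the current directory
apt-get download libgnutls30
# hash the file we actually received
sha256sum libgnutls30_*.deb
# compare against the SHA256 the repository index advertises
apt-cache show libgnutls30 | grep '^SHA256'
```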
Running an apt update once again fixed this pretty easily, and I was able to go ahead with the upgrade without any issues.
I would like to build an internet archiver to archive these posts.
What it will essentially do is:
1) Get the HTML content of the link I give it
2) Save it in a key-value store with all links in the page updated
3) Recursively save all links to the key-value store
It should allow for:
1) Content to be loaded just as it looks (no dynamic content)
2) Links to work and point to a valid page
Thinking a bit more about its design:
1) Rather than referring to the actual link in the query parameters, it will be better to use a hash and keep a reverse map of hash to URL if required in the future (or perhaps the URL can be stored in a comment in the HTML body that gets saved)
2) So the crawler will work like this:
* Get HTML content
* Insert the crawled URL as a comment in the head of the HTML
* Save this content to a key-value DB with the URL hash as the key and the modified HTML as the body.
* Add a link to the above somewhere else to serve as a starting point for navigation (something like sections), or index it with Elasticsearch so that we can build a search engine over the articles.
This does sound like the Wayback Machine, but its objective is just to preserve content that I want to preserve; tracking changes over time is not required at all, and this will mostly be a personal collection of things.
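As a first pass, here is a rough sketch of that crawl-and-store step in shell, with curl and a plain directory standing in for the key-value store; the comment format, the paths, and the single level of recursion are all assumptions at this stage, not a finished design:

```
#!/bin/sh
# archive_page.sh <url>: fetch a page, key it by the hash of its URL,
# and embed the original URL as a comment so the reverse map stays recoverable.
url="$1"
store="${ARCHIVE_DIR:-$HOME/archive}"
mkdir -p "$store"

# key = SHA256 of the URL; the file name itself doubles as the key
key=$(printf '%s' "$url" | sha256sum | cut -d' ' -f1)

# fetch the HTML and drop the source URL in as a comment right after <head>
curl -sL "$url" \
  | sed "s|<head>|<head><!-- archived-from: $url -->|" \
  > "$store/$key.html"
echo "saved $url -> $store/$key.html"

# naive single level of recursion: archive absolute links found on the page
# (rewriting in-page links to point at the local copies is left for later)
grep -o 'href="http[^"]*"' "$store/$key.html" \
  | cut -d'"' -f2 \
  | while read -r link; do
      lkey=$(printf '%s' "$link" | sha256sum | cut -d' ' -f1)
      [ -f "$store/$lkey.html" ] && continue   # skip pages already archived
      curl -sL "$link" \
        | sed "s|<head>|<head><!-- archived-from: $link -->|" \
        > "$store/$lkey.html"
    done
```

Swapping the directory for a real key-value store (and adding Elasticsearch for search) can come later; the hash-as-key idea stays the same.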