Tuesday, August 11, 2020

Enabling SSH Access to WSL2 from a Different Computer

WSL2 is such a blessing for Windows. As much as I love Linux, on some laptops it simply isn't supported well. It's not Linux's fault, though... Many hardware vendors simply don't provide Linux drivers for their hardware, making the experience on Linux sub-par compared to its Windows counterpart. As an example, I own an ASUS ROG laptop, a GL503, that I use for entertainment purposes. Linux didn't work properly out of the box on this laptop, whose 120Hz screen requires the NVIDIA GPU to be running all the time. Luckily, WSL2 comes to the rescue! From being an entertainment-only machine, I can finally use this laptop to do some development work! How convenient!

Okay, that's enough chit-chat. This time, for development purposes, I need to SSH from a different machine into the WSL2 instance running on my Windows machine. This turns out to be a bit tricky. So here's how you do it:

1. Install the OpenSSH server on WSL2: `sudo apt install openssh-server`

2. Modify the OpenSSH server configuration

```
# 1. Update the port from 22 to something else, e.g. 8828. The reason is that
#    port 22 is already reserved by Windows
# 2. Uncomment and change ListenAddress to 0.0.0.0
# 3. Uncomment and change PasswordAuthentication to yes
sudo vim /etc/ssh/sshd_config

# Generate host keys
sudo ssh-keygen -A

# Restart the OpenSSH server
sudo service ssh restart

# Note the WSL2 IP address
ifconfig
```
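
For reference, after those edits the relevant lines in /etc/ssh/sshd_config should end up looking something like this (8828 is just the example port from above):

```
Port 8828
ListenAddress 0.0.0.0
PasswordAuthentication yes
```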

  • Now, try to SSH into WSL2 from Windows (on the same computer), using the WSL2 IP address from before. You can do that using PuTTY or a terminal. If it fails, use the -vvv flag to see why. Make sure this works before proceeding to the next step
  • Now, forward connections from the Windows IP address to the WSL2 IP address by running a command like this (replace 172.27.136.236 with the WSL2 IP address you noted earlier): `netsh interface portproxy add v4tov4 listenport=8828 listenaddress=0.0.0.0 connectport=8828 connectaddress=172.27.136.236`
  • Then try to SSH into the WSL2 machine again, but this time use the Windows IP address. Only proceed if this is successful
  • Now, you would think it's over, right? Not quite... The last step is to change the Windows firewall settings to allow SSH to port 8828 from outside [1]
    • Navigate to Control Panel, System and Security and Windows Firewall.
    • Select Advanced settings and highlight Inbound Rules in the left pane.
    • Right click Inbound Rules and select New Rule.
    • Add the port you need to open and click Next.
    • Add the protocol (TCP or UDP) and the port number into the next window and click Next.
    • Select Allow the connection in the next window and hit Next.
    • Select the network type as you see fit and click Next.
    • Name the rule something meaningful and click Finish.
  • That's it!
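
As an aside, instead of clicking through the GUI, the same inbound rule can be created from an elevated PowerShell prompt. A one-liner sketch, assuming the same port 8828 as above:

```
# Allow inbound TCP connections to port 8828 (the SSH port we picked earlier)
New-NetFirewallRule -DisplayName "WSL2 SSH" -Direction Inbound -Protocol TCP -LocalPort 8828 -Action Allow
```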


Hope this is helpful and could save you some time!


Reference:
1. Changing firewall settings on Windows

Monday, April 20, 2020

Perceptual Loss / VGG Loss Function - Is This the Magic Behind Magic Pony Technology's Hundred-Million-Dollar Acquisition?

MagicPony is a deep-learning startup that got acquired by Twitter back in 2016. There wasn't much written about what they did to be valued at $150 million, except that it had something to do with better video compression technology. From poking around online and looking at the background of Zehan Wang -- MagicPony's CTO -- it seems Deep Learning-based Super Resolution is at the heart of their technology. This seems to be further confirmed by the paper published after the acquisition happened: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network.

Deep learning-based Super Resolution is not a novel technique. Several papers had already been published on it, which I read back in 2013. But there were several challenges, such as the slowness of the algorithms (low FPS) and results that weren't much better than conventional bicubic interpolation. What MagicPony seems to have achieved is a way to do it faster and much better.

The paper published by MagicPony describes a GAN-based technique for Super Resolution. One of the most interesting aspects of the paper is how they use the Perceptual Loss / VGG Loss to compute the loss between the original high-res image and the deep learning-upscaled image. In the paper, they mention it as one of the key factors in getting a significantly better result. So what is it? It's a loss computed on one of the convolutional layers taken from a VGG-16 model pre-trained on the ImageNet dataset. There was another paper that dissects and tries to understand the different layers of the VGG-16 model by visualizing each of them. They found that a particular layer can be leveraged as a better way to perceptually compare two images (i.e. versus using a naive pixel-based MSE loss function).
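
To make this concrete, here is a minimal sketch of what a VGG-based perceptual loss could look like in PyTorch. This is just my own illustration of the idea, not MagicPony's actual code, and the choice of which layer to tap (layer_index) is an assumption:

```
import torch.nn as nn
from torchvision import models

class VGGPerceptualLoss(nn.Module):
    def __init__(self, layer_index=16):
        super().__init__()
        # Frozen VGG-16 pre-trained on ImageNet; keep only the layers up to
        # the tap point. Which layer works best is an empirical choice.
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(vgg.features.children())[:layer_index]).eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.mse = nn.MSELoss()

    def forward(self, upscaled, original):
        # Compare feature activations instead of raw pixels; this correlates
        # better with perceived similarity than a plain pixel-wise MSE.
        return self.mse(self.features(upscaled), self.features(original))
```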

I can't help but wonder about the following:
1. Would a more traditional CNN-based Super Resolution technique (e.g. SRCNN) yield as good a result by simply swapping its loss function for the VGG loss function?

Friday, October 4, 2019

Understanding How Anbox (Native Android App Runner on Linux Host) Works

2019-10-24

One of the things I really like about computer science is the creativity part. I often think about possibilities. A couple weeks ago, I was thinking that it'd be sleek to be able to run Android applications natively on a Linux OS. There are already simulator/emulator-based solutions, but they typically have a performance impact and a not-so-nice user experience. For example, the official Android device emulator in Windows 10, the one that comes with the latest version of Android Studio, often causes 99% CPU usage even when it's not running anything. From googling around, I learned that this was because of the audio subsystem; disabling audio seems to fix the issue. But it got me thinking whether there's a more native way to run Android applications on Linux, ideally by using the host machine's Linux kernel. It should be totally doable, because Google's Chrome OS already does it. I'm pretty sure Chrome OS doesn't use an emulation-based technique, because a lot of the Chromebooks out there don't have fancy specifications.

And just like so many other things, somebody already did it! I found Anbox (https://anbox.io/) through a quick Google search. This thing is kinda cool. It uses DKMS, which is a way to compile kernel modules dynamically against whatever kernel the host is running. DKMS is used to compile binder and ashmem, which are IPC mechanisms used in Android. These two were created specifically for Android, so they're not available in the mainline kernel. Anbox puts these DKMS modules in a separate GitHub repository: https://github.com/anbox/anbox-modules, which is essentially the kernel driver code forked out of the Android kernel and then modified to work on a host machine. Installation can be done by following the README file over there.
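
From memory, after installing the modules per the README, you can sanity-check that they load and that the Android IPC device nodes show up. The module names below are the ones I recall the anbox-modules repo using; treat the README as authoritative:

```
sudo modprobe ashmem_linux
sudo modprobe binder_linux
# Both modules should be listed, and the device nodes should exist
lsmod | grep -e ashmem_linux -e binder_linux
ls -1 /dev/ashmem /dev/binder
```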


Installing Anbox is quite painful on Arch Linux. According to the manual, it should work flawlessly on Ubuntu with the snap package; unfortunately, that doesn't work on Arch. I tried doing `yaourt anbox` but it didn't work. So I had to follow the guide over here: https://wiki.archlinux.org/index.php/Anbox. I cloned each of the AUR repositories mentioned there, and then installed them manually using the makepkg command.

After minor modifications, I was finally able to install all the packages mentioned on the Arch Linux wiki page. Unfortunately, things still didn't run out of the box after that. I had to peek at the PKGBUILD files and the Anbox source code (https://github.com/anbox/anbox) to figure out how to run everything. After everything is installed, we need to run two things, the container manager and the session manager:
1. sudo systemctl start anbox-container-manager.service
2. systemctl --user start anbox-session-manager.service

After the two are running, if we do `adb devices`, we should see one new device show up. This is our Android device, with which we can install APKs and such.
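
For example, to install an app into the container (the APK filename here is just a placeholder):

```
# The Anbox container shows up as a regular ADB device
adb devices
# Install an APK into it
adb install my-app.apk
```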

Throughout this process, I learned about D-Bus, a commonly used IPC mechanism on Linux, which the Anbox components use to communicate with each other. `anbox session-manager` creates a D-Bus server, which `anbox wait-ready` uses to communicate with the container manager about status, such as whether the Android device has been created.

2019-10-07

Today I tried to understand how the source code (https://github.com/anbox/anbox) works internally, from a high-level perspective. I started with the main.cpp file under the src/ directory, which is the entry point of the program. While browsing the code, I noticed it uses a lot of newer C++ features I didn't recognize.
I was actively using C++ when working at NVIDIA in 2013-2015 doing Android OS-related work, but I never knew such features existed. I guess they're newer features, or among the set of features that are "banned" by Google -- throughout Google, only a subset of C++ features is allowed to be used. Looking through the Anbox codebase, I kinda understand now why Google bans some C++ features: they're just so confusing, and they add a significant amount of time to getting up to speed with a codebase. I can see those features being useful, but they're just not necessary, IMO. Anyway, let's get back on topic.

So the container manager, the session manager, and the client side that launches a new Android application are all run through this same entry point: main.cpp. Each of the possible "commands" is defined by a Command class, which has an action callback that gets called when that command is run from the terminal. For example, "anbox session-manager" ultimately invokes the action callback inside session_manager.cpp (line 119 as of this writing). Different commands are separated into different files under src/anbox/cmds, which makes it easy to figure out how they work.

From my experience working with a large codebase (AOSP), the best way to understand how it works is to go backward: run the application, find a portion of the program that is interesting, then backtrack to figure out which code path does that interesting thing. In Anbox's case, the thing that piques my interest is the moment the Android operating system starts to "kick in". I want to understand how the host OS starts running the LXC container that contains the "Android operating system". From my experience with AOSP, Android boots much like Linux in general: the bootloader boots up, then it bootstraps the kernel, then the kernel runs the very first program, the init process, which then runs zygote, which handles all the JVM-related things.

In order to understand how Android "kicks in", I looked at how the "anbox launch" command works (i.e. "anbox launch --package=org.anbox.appmgr --component=org.anbox.appmgr.AppViewActivity"), which, when called, opens up a new window with the specified Android application running in it (e.g. WhatsApp). What happens is that main.cpp creates an instance of the Daemon class (daemon.cpp), then invokes its run function with the program arguments. Pretty straightforwardly, daemon.cpp invokes the Launch class, which is a subclass of the Command class, and the action callback set up inside Launch's constructor is called. The first thing it does is create and display a splash screen. It then uses D-Bus to talk to the other daemon -- I'm not sure yet whether it's the container manager or the session manager. Through RPC, it tells the daemon that it wants to run an Android activity whose package and component names are described in the program arguments.

TO BE CONTINUED....

Tuesday, November 27, 2018

Why All JavaScript Developers Should Learn TypeScript

First of all, I wanna talk about my own motivations behind learning TypeScript. Coming from a statically-typed language background, when I was first introduced to a dynamic language (JavaScript ES6) in a non-toy project, it felt positive in many ways. It was particularly powerful for prototyping a program whose structures weren't well determined yet. In my case, the business requirements were still ambiguous, so things were inevitably changing. It was also convenient because a dynamically-typed language is often also a scripting language: being able to save the code and immediately run it, without having to wait for compilation, is just so convenient.

But as soon as my code-base grew large, JavaScript became hard to maintain. Scaling up my medium-sized code base became very hard because:
1. Special discipline was needed to keep the code readable and maintainable (e.g. by manually documenting the input and output of each function).
2. Refactoring became very hard, as there was no easy way to be 100% sure where a function or a class is referenced from, which made renaming a function, class, or file painful, and likewise changing a function's parameter or return type.
3. The absence of a type-checker also caused many mistakes that would have been easily caught during the compile phase of a statically-typed language to stay hidden until run-time, hence requiring super rigorous testing. (Don't forget how JavaScript is sometimes crazy, e.g. == vs. ===.)
4. Overseeing and reviewing beginner developers who didn't yet have that discipline was also very hard, particularly because the language did not "force" them to think about data types and structures before implementing something, which oftentimes resulted in them writing spaghetti code.

TypeScript is wonderful because it combines the best of both static and dynamic typing. It is a superset of ES6, which means all ES6 code will work as-is without requiring any changes at all. This allows one to quickly prototype in regular dynamically-typed JavaScript, and then slowly add type definitions as the requirements steadily mature. TypeScript code also interoperates fine with JavaScript code, so you're not gonna run out of libraries: you can use everything you find on NPM. It still requires a lot of discipline to make the type system useful, though, because one can simply rename a ".js" file to ".ts" without actually leveraging the power of the type system, duh! But I guess the effectiveness of a tool always depends on the hands of its users anyway. Still, it's a real solution for keeping a large JS code-base maintainable while staying versatile when needed.

Visual Studio Code is another reason why one would wanna use TypeScript. It supports all the fancy stuff that static-language IDEs support, such as auto-refactoring, auto-completion, jump-to-definition, symbol search, etc. If you ever used and came to appreciate typical Microsoft IDEs, this feels similar. And not only that, this time Microsoft chose to publish its source code on GitHub! So yeah, we no longer have the worry that typically comes with using closed-source programs: falling in love, then getting heart-broken because things didn't go as hoped.

Another thing to love about TypeScript is how powerful its type system is. Even if you only read its documentation, you'll probably guess that Microsoft had a bunch of world-class programming-language experts and PhD holders working on this super-smart type system. One thing I enjoy is its structural typing, where a type is compatible with another as long as it has the properties expected of it. So this is kinda like duck-typing, but typed, meaning that the transpiler is going to yell if you try to pass in incompatible types. It also supports type unions, intersections, etc., just like what one would expect from a typical modern programming language.
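
As a small illustration of that structural compatibility (my own toy example, not taken from the TypeScript docs):

```
interface Point {
  x: number;
  y: number;
}

// logPoint only cares about the shape of its argument, not its declared type
function logPoint(p: Point): void {
  console.log(`(${p.x}, ${p.y})`);
}

const pos = { x: 3, y: 7, label: "spawn" };
logPoint(pos);         // OK: pos has x and y; the extra property is fine
// logPoint({ x: 3 }); // Error: property 'y' is missing
```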

In the next blog post, I'm gonna write about how I refactored my NodeJS ES6 code-base into TypeScript.

How I Came Back from a Debilitating Back Pain to Be a Productive Programmer

First of all, I want to tell you that I'm not somebody with a medical background. I'm just a regular software engineer who happened to suffer from a mild herniated disc a few years ago, perhaps due to my profession, which involved a lot of sitting on a regular basis.

About 3 years ago, in 2015, I began feeling mild lower back pain at the end of every work day. It wasn't painful enough to impair my life, but it was uncomfortable enough that I would sleep on a carpeted floor every night thinking it would help. At that moment, I didn't know what it was. I just assumed I lacked exercise and stretching.

Fast-forward 3 years to early 2018: my lower back got painful enough that I could no longer sit down for more than 15 minutes. After 10-15 minutes of sitting, the urge to get up was too much due to the pain. That's when I began to see different doctors. One semi-traditionalist Chinese doctor practicing bone-related therapy told me it was because I had a mild scoliosis. He observed that my back bent to the right side a little bit. I went to therapy with him for a few months without much improvement. At this time, it had been hard to continue my life as a programmer, as I could barely sit down. Even when lying down in bed -- supposedly the best remedy for a back-pain sufferer -- I still felt uncomfortable. Every day had been a real struggle that I could only offer up to God.

One day, somebody suggested I see a nerve-specialist doctor. I did, and after I described my symptoms -- stinging pain that went from the leg up to the lower back when sitting down, and stiff lower-back muscles -- she immediately told me to get an MRI and an X-ray. It turned out that I had a mild herniated disc. The pain I experienced was sciatica, pain related to the sciatic nerve. In my disc herniation case, the sciatic nerve got pinched by the discs in my back, between the L4-L5 and L5-S1 sections. Apparently the sciatic nerve is one of the largest nerve roots in the body, spanning from the leg all the way up to the back. This perfectly explained why I felt stinging pain in my legs when sitting down. I had always been confused as to why my legs felt painful when sitting!

My nerve doctor told me that although my pinching was still mild, if not treated seriously, it could get pretty bad. One might not even be able to have a kid, as an impaired sciatic nerve could cause erectile dysfunction! Scary, scary... She told me I had to swim back-stroke for 1 hour a day, every day. It had to be back-stroke, no other form. Breast-stroke is particularly prohibited because it would put strain on the lower back. She said she didn't want me to go for surgery, because my case was still mild and should be treatable organically. She also didn't want me to go to a chiropractor or some similar means, because if it's not done properly, it could have significant consequences. And getting it done properly is a challenge, as there can be different root causes for different people.

So I did what my doctor suggested. I diligently swam every day for about 3 months. After that, I felt a lot better. I could sit down for 30-60 minutes, although I would still feel a lot of pain afterwards: I had to lie down in bed for about an hour after finishing a 45-60 minute sitting session. Although the struggle was still there, I was very grateful. I could still remember how impaired my life was before meeting the doctor.

At this point, I began experimenting. I watched many self-help videos on YouTube and googled around. I highly recommend the Athlean-X YouTube channel. The guy who makes the videos is a physiotherapist by profession, specializing in sports injuries. I wouldn't say that doing the exercises suggested in those self-help videos cured me, but some of them definitely helped me alleviate the pain. Generally, though, I suggest being very cautious about experimenting with self-help videos. Try to read the comments and critically study all the suggestions. Also, consult with your doctor before experimenting with any move or exercise!

At this point, I was still at that 30-60 minute maximum of sitting, and I still had to lie down in bed afterwards, as the after-sitting pain was unmanageable otherwise. I still swam regularly, about 2-3 times a week, but I felt it wasn't as helpful anymore. Not happy with my progress, I began to google around for weight-related exercises that might help with sciatica, and I came across the lateral pull-down. A particular video warned me that doing it with bad form and posture could make the sciatica worse instead of better. On the flip side, doing it properly can be very helpful. The guy in the video explained that the key is to do it properly: when exercising, the back has to be 'locked' in a proper posture the whole time, and the weight shouldn't be so heavy that the back has to 'overbend' to manage it. I considered doing it because I had pain on my right side while my left side was generally much better. I figured that perhaps my pain was due to the discs 'bending' to the right side, which was consistent with the MRI scan. In my own theory, the lateral pull-down would be useful because it would balance out the discs that were bent to the right. I thought that back-stroke swimming wasn't useful anymore because my discs needed more force than swimming could provide to balance themselves out. After all, swimming can only exert a limited amount of force, right..?

For the past 2 months, I have been hitting the gym 3-5 times a week. There are three moves I always put in the routine: the McKenzie exercise, the plank, and the lateral pull-down. The McKenzie exercise is one of the most commonly known exercises to alleviate sciatica. The plank is a core-strengthening exercise; lower-back issues are commonly associated with a lack of core strength -- the muscles whose job is to support the lower back aren't strong enough to do the job, hence the backbone is affected, and therefore back pain! And the lateral pull-down is to "fix the unbalanced discs", as I theorized earlier. And you know what? I am now able to sit down for 2 hours straight without much significant pain! I feel a slight discomfort when sitting, but it is totally manageable. I also no longer need to lie down in bed after every sitting session. I can't thank enough all the people who shared the information that helped me get where I am now. I think I am now ready to come back to a full-time programming job :)

P.S. Again, by all means, I'm not a medical professional nor trained in the area. I am only sharing this to spread what I feel has been helping me. If you ever decide to try what I do, please consult with your physician first.

Sunday, July 1, 2018

Ways of Using an NVIDIA Graphics Card in a Linux Machine

There are different ways of using NVIDIA graphics drivers on a Linux machine, such as:
1. Using it as the main GPU
Cons:
a. Significantly more power consumption compared to using the Intel HD GPU
b. Known artifacts, where a portion of the screen sometimes gets corrupted. This goes away when the mouse hovers over it, but it's still annoying

2. Using it with Bumblebee to get a PRIME-like feature.
As most of you are probably aware, in Windows the default setting is that the NVIDIA GPU is turned on only when it's needed. For example, when you're running a game, it's turned on; but when you're browsing, editing a text file, or doing something else lightweight, it's turned off to conserve the battery. In Linux, however, such a feature doesn't work out of the box. The open-source community created an effort to get a similar feature: the Bumblebee project. With Bumblebee, the Intel HD GPU is the one that renders to the screen. The NVIDIA GPU, when requested, is used to render to a separate off-screen buffer, which is then transported into the Intel HD buffer to be displayed on the screen. Unlike the PRIME feature in Windows, however, the switch between NVIDIA and Intel is not automated: you have to specifically tell a program which GPU to use (see the example below).
For more information: https://github.com/Bumblebee-Project/
Cons:
a. Setup can be pretty complex
b. Performance is not as good as using NVIDIA to render directly to the screen, because of the additional step of transporting the NVIDIA-rendered buffer to the screen buffer, which is a CPU-consuming step
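
For example, with Bumblebee set up, you wrap a command with optirun to run it on the NVIDIA GPU:

```
# Runs glxgears on the NVIDIA GPU via Bumblebee; everything else stays on Intel HD
optirun glxgears
```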

3. Using nvidia-xrun to run a separate X server which uses the NVIDIA GPU.
This is actually my favorite, because it is very easy to set up. To give a little bit of background: in Linux, there is a component called the X server, whose job is to provide an interface for applications to render things onto the screen. In other words, it's a bridge between applications and the display hardware; any application that renders significant graphics to the screen uses the API provided by the X server. So the idea here is to run 2 separate X servers. The first is used to run typical lightweight workloads such as text editing, web browsing, etc., and is backed by the Intel HD driver. The second is backed by the NVIDIA driver, hence uses the NVIDIA card, and is only run when there is a need to run an application that requires it. So this is very similar to the Bumblebee project, except much simpler.
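
Roughly, usage looks like this (the exact session or application to launch is up to you; check the nvidia-xrun README for details):

```
# From a free virtual console (e.g. Ctrl+Alt+F2), start a second X server on
# the NVIDIA GPU running the given application:
nvidia-xrun openbox-session
```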



Special thanks to:
https://wiki.archlinux.org/index.php/NVIDIA
https://github.com/Witko/nvidia-xrun
https://github.com/Bumblebee-Project/

Wednesday, June 27, 2018

Recovering Arch Linux from a Failed System Upgrade

Once in a while, it's nice to do a complete system upgrade in Arch Linux: all the packages get updated to their latest bleeding-edge releases.

However, the upgrade can fail for some unexpected reason, such as a power outage. In my case, I left my system upgrading overnight, and unfortunately, in the morning the screen was all black and nothing worked. I tried rebooting, but even GRUB didn't show up!

Upgrading the Arch system

# Beware: this might take a long time to run. Could be a few hours.
sudo pacman -Syu

Fixing a Corrupted Upgrade


1. Boot into an Ubuntu live CD
2. Figure out the partition where your Arch is installed
sudo fdisk -l
3. Mount the partition
sudo mount /dev/sdaX /mnt
4. Mount the pseudo-filesystems
sudo mount -t proc /proc /mnt/proc
sudo mount --rbind /sys /mnt/sys
sudo mount --rbind /dev /mnt/dev

5. Chroot into Arch; this basically gets you into the Arch system using the live Ubuntu's kernel
sudo chroot /mnt
6. Now that you're inside your Arch system, you can do whatever is needed to fix your installation. Below are some things that might be needed:
# Finish the unfinished Arch upgrade.
# To see whether you need this, check /var/log/pacman.log. There you'll see what happened during your last upgrade (e.g. it got interrupted in the middle).
sudo pacman -Syu

# Re-create the initial RAM disk; this builds the early-boot image the kernel uses to initialize devices
# For more info: https://wiki.archlinux.org/index.php/mkinitcpio
sudo mkinitcpio -p linux

# Fix the GRUB installation. In my case, GRUB didn't show up on boot, so perhaps it was corrupted or something
sudo grub-mkconfig -o /boot/grub/grub.cfg # Unlike in a normal environment, /boot here is mounted read-write, so writes here will persist
# P.S.: If you dual-boot Arch with another distro, mount the other distro's partition before running grub-mkconfig;
# GRUB will then automatically detect it and create an appropriate entry in the bootloader

7. Reboot your system! :)

Hopefully this is helpful. Much thanks to Mort Yao's article: https://www.soimort.org/notes/170407/