# Golang package

## Background

In this post, I’ll talk about Go packages based on my learning and hands-on experience.

You’ll learn the following topics in this post:

• How to use and import a Go package
• Demo with real-world Go package

#### What’s a Go package

Simply speaking, Go packages are Go’s solution to code reuse, which is an important part of software engineering.

In Golang’s official documentation, the definition of packages goes as follows:

> Go programs are organized into packages. A package is a collection of source files in the same directory that are compiled together. Functions, types, variables, and constants defined in one source file are visible to all other source files within the same package.

There are several critical points in this definition. Let’s review them one by one.

• First, one package can contain more than one source file. This is different from some other languages; in JavaScript, for example, each source file is an independent module that exports variables for other files to import.

• Second, all the source files for a package are organized inside a directory, and by convention the package name is the same as the directory name.

• Third, files inside subdirectories are excluded: each subdirectory is a separate package.

To get a better understanding of these three points, let’s check the structure of the net package in the Go standard library.

All the .go source files directly under the net directory contain the following package declaration at the top of the file:
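That declaration is the one-line package clause:

```go
package net
```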

This means that it is part of the net package.

There are several subdirectories under the net directory, and each of these subdirectories is an independent package. For example, the net/http package consists of all the files inside the http subdirectory. If you open the files inside the http directory, the package declaration is:

#### Types of Go package

Generally speaking, there are two types of packages: library packages and the main package. After the build, the main package is compiled into an executable file, while a library package is not self-executable; instead, it provides utility functions.
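To make this concrete, here is a minimal main package (the message is illustrative); building it with go build produces an executable:

```go
package main

import "fmt"

// greeting returns the message main prints.
func greeting() string {
	return "hello from the main package"
}

// main makes this a main package: the compiled result is an executable.
func main() {
	fmt.Println(greeting())
}
```

A library package would look the same minus the main function, with a different package name on the first line.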

#### Member visibility of Go package

Different from other languages like JavaScript, a Golang package doesn’t provide keywords such as export, public, or private to explicitly export members to the outside world.

Instead, the visibility of a member inside a package is determined by the casing of the first letter of its name. If the first letter is upper case, the member can be imported by other packages.
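A small sketch of the rule (names are illustrative): an identifier starting with an upper-case letter is exported, while a lower-case one stays private to its package:

```go
package main

import "fmt"

// Greet starts with an upper-case G, so it is exported:
// other packages importing this one could call it.
func Greet() string { return greeting() }

// greeting starts with a lower-case g, so it is unexported:
// only code inside this package can call it.
func greeting() string { return "hello" }

func main() {
	fmt.Println(Greet())
}
```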

#### Lifecycle of package

For the library packages we mentioned above, the init function is called automatically when the package is imported. You can do package initialization work inside it.

For the main package, it must provide the main function as the entry point when it runs.
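A minimal sketch showing both lifecycle hooks; init runs before main without ever being called explicitly:

```go
package main

import "fmt"

var initialized bool

// init runs automatically when the package is loaded, before main.
func init() {
	initialized = true
}

// main is the mandatory entry point of a main package.
func main() {
	fmt.Println("initialized before main:", initialized)
}
```

Running this prints `initialized before main: true`, confirming the ordering.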

## Use and Import Go package

Before Go modules were introduced, Golang application development was based on the Go workspace. In this post, I’ll focus on the workspace-based approach; Go modules are another topic I’ll cover in a future post.

#### Go workspace

By convention, all your Go code, and the code (or packages) you import, must reside in a single workspace. A workspace is nothing but a directory in your file system whose path is stored in the environment variable GOPATH.

As a newcomer to the Golang world, the GOPATH workspace configuration confused me a lot at the beginning.

For example, say you want to use the third-party library Consul in your application. After you run
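In the pre-module era, that command is go get with the import path (a sketch; it fetches the source into $GOPATH/src):

```shell
go get github.com/hashicorp/consul
```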

The library is installed on your local machine; the code is cloned on disk at $GOPATH/src/github.com/hashicorp/consul. In your application, you import this library in the following way:

Thanks to the GOPATH mechanism, this import can be resolved on disk, and the Go tool can locate, build and test the code. Simply speaking, the package name maps to the real location of the package on your local machine. But of course, this mechanism has many limitations, such as package version control, workspace constraints and so on. That’s the motivation for Go modules.

#### Ways to import a Golang package

Besides the default way, there are several ways to import a package based on your usage.

Import as alias: this is useful when two packages have the same name. You can give any alias to an imported package as below:

Import for side effect: when reading the source code of popular open source projects, you can see many packages imported in the following way:

It’s widely used when all you need from the imported package is the running of its init method. For example, in the above case, the library pq is imported in this way. If you check the source code of the pq library, its init method calls the sql.Register method for registration, as below:

Internal package: this is an interesting feature to learn. internal is a special directory name recognized by the Go tool which prevents the package from being imported by any other package unless both share a common ancestor directory. The packages in an internal directory are said to be internal packages. For details, you can refer to this article.

# Fabio source code study part 1

### Background

In this two-part blog series, I want to share the lessons learned from reading the source code of the project Fabio. In my previous blog, I shared with you how to use Fabio for load balancing in microservices applications; for details, you can refer to this article. Since Fabio is not a tiny project, it’s hard to cover everything inside it.
I will mainly focus on two aspects: first, at the architecture design level, I will study how it can work as a load balancer without any configuration file (part one); and second, at the language level, I want to summarize the best practices of writing Golang programs by investigating which features of Golang it uses and how it uses them (part two).

### Fabio architecture design

Let’s start by introducing some background about Fabio. The following is from its official document:

> Fabio is an HTTP and TCP reverse proxy that configures itself with data from Consul.

Traditional load balancers and reverse proxies need to be configured with a config file. If you’re familiar with other load balancer services such as Nginx, it will be easy for you to understand how Fabio is different and why it seems interesting. For example, if you’re using Nginx as your load balancer, you need to maintain a config file where the routing rules are defined as below:

But Fabio is a zero-conf load balancer. Cool, right? Let’s review the design and code to uncover the secrets under the hood. Simply speaking, Fabio’s design can be divided into two parts: the Consul monitor and the proxy. The Consul monitor forms and updates a route table by watching the data stored in Consul, and the proxy distributes each request to a target service instance based on the route table.

#### main function

The main function defines Fabio’s workflow. To understand how Fabio works, we only need to focus on three points:

• initBackend() and watchBackend(): these two functions contain the Consul monitoring logic.
• startServers(): this function is responsible for creating the network proxy.

#### Consul monitoring

First, let’s review the initBackend function:

This function is not hard to understand. Fabio supports various modes: file, static, consul and custom, and it selects one mode based on the conditions inside the cfg parameter. In our case, we only need to focus on the consul mode.
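As a sketch of that selection logic (the identifiers here are illustrative, not Fabio’s exact code):

```go
package main

import "fmt"

// Config is a stand-in for fabio's cfg parameter; the field is illustrative.
type Config struct {
	Backend string
}

// initBackend sketches how fabio picks a registry implementation
// from the configured mode (file, static, consul or custom).
func initBackend(cfg Config) string {
	switch cfg.Backend {
	case "file":
		return "file registry"
	case "static":
		return "static registry"
	case "consul":
		return "consul registry"
	default:
		return "custom registry"
	}
}

func main() {
	fmt.Println(initBackend(Config{Backend: "consul"}))
}
```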
Next, let’s review the watchBackend() function to check how it keeps watching Consul’s data.

First, in line 24, we need to understand registry.Default.WatchServices(). Since the initBackend function already decided we’re using the consul mode, we need to check the WatchServices() function inside the consul package:

The return value is svc, which is just a string-typed channel. And the svc channel is passed into the goroutine go m.watch() as an argument. This is a very typical usage in Golang programming where two goroutines need to communicate with each other via a channel. Let’s go on and check the watch function:

You can see updates <- w.makeConfig(passing) in line 21; it just sends a message into the channel. Another interesting point is w.client.Health().State("any", q) in line 11. This is an API provided by the consul/api package. If you check its implementation, you’ll find out that it just sends an HTTP GET request to the /v1/health/state/ endpoint of the Consul service, which returns the health check status for all the services registered in Consul. And the above logic runs inside a for loop; in this way Fabio keeps sending requests to query the latest status from Consul. If new services are discovered, the status is updated dynamically as well, with no need to restart Fabio. So far you should understand how Fabio can work as a load balancer without any hardcoded routing config.

Let’s go back to the watchBackend function to continue the analysis. After debugging, I found that the message passed via the svc channel follows this format:

The next step is converting this string message into the route table. In lines 46 and 51 of the watchBackend function, you can find these two lines of code:

Everything becomes clear after you check the implementation of the route package. The route.NewTable() function returns a Table-typed value, which is in fact a map. The Table type declaration goes as follows:

That’s all for the consul monitor part.
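The monitor pattern just described — one goroutine watching Consul and publishing route updates through a string channel — can be sketched with the stdlib only (the route strings are made up for illustration):

```go
package main

import "fmt"

// watch stands in for fabio's consul watcher goroutine: it publishes
// formatted route commands on the channel and closes it when done.
func watch(updates chan<- string) {
	updates <- "route add helloworld /helloworld http://10.0.0.1:8080/"
	updates <- "route add helloworld /helloworld http://10.0.0.2:8080/"
	close(updates)
}

func main() {
	svc := make(chan string) // plays the role of the channel WatchServices returns
	go watch(svc)            // producer goroutine
	for update := range svc { // consumer: would build the route table from updates
		fmt.Println("monitor received:", update)
	}
}
```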
Simply speaking, Fabio keeps polling the latest service status from Consul and processes the status information into a routing table.

#### Proxy

The second part is the network proxy, which is easier to understand than the first part. Fabio supports various network protocols, but in this post let’s focus on the HTTP/HTTPS case. Inside the main.go file, you can find the following function:

The return value’s type is http.Handler, which is an interface defined in the Go standard library as follows:

And the actual return value’s type is proxy.HTTPProxy, which is a struct implementing the ServeHTTP method. You can find the code inside the proxy package in the Fabio repo. Another point that needs to be mentioned is the Lookup field of the HTTPProxy struct:

You don’t need to understand the details; just pay attention to route.GetTable(), which is the routing table mentioned above. The Consul monitor maintains the table and the proxy consumes it. That’s it.

In this article, part one of this blog series, you learned how Fabio can serve as a load balancer without any config files by reviewing the design and reading the source code. In part two, let’s review how Golang was used and try to summarize the best practices of writing Golang programs.

# Load balancing in Golang Cloud-Native microservice with Consul and Fabio

### Background

In the last post, I showed you how to do service discovery in a Golang Cloud-Native microservice application based on Consul and Docker with a real demo. In that demo, the simple helloworld-server service is registered in Consul and the helloworld-client can discover the dynamic address of the service via Consul.

But the previous demo has one limitation: as I mentioned in the last post, in a real-world microservice application, each service may have multiple instances to handle the network requests. In this post, I will expand the demo to show you how to do load balancing when multiple instances of one service are registered in Consul.
Continuing from the last post, the new demo keeps using the Cloud-Native way with Docker and Docker-compose.

### Fabio for load balancing

To do load balancing for Consul, there are several strategies recommended in the Consul official document. In this post I choose to use Fabio. Fabio is an open source tool that provides a fast, modern, zero-conf load balancing HTTP(S) and TCP router for services managed by Consul. Users register services in Consul with a health check, and fabio will automatically route traffic to them. No additional configuration is required.

Fabio is an interesting project: it realizes load balancing based on the tag information of the service registration in Consul. Users register a service with a tag beginning with urlprefix-, like:

Then, when a request is made to fabio at /my-service, fabio will automatically route traffic to a healthy service in the cluster. I will show you how to do it in the following demo. I will also do some simple research on how Fabio realizes this load balancing strategy by reviewing the source code, and share the findings in the next post.

### Fabio load balancing demo

First, all the code and config files shown in this post can be found in this github repo; please git checkout the load-balancing branch for this post’s demo.

#### Server side

For the helloworld-server, there are two changes:

• First, each service instance should have a unique ID;
• Second, add Tags for the service registration, and the tag should follow the rule of Fabio.

OK, let’s check the new version of the code.

##### server.go

The changes are at lines 30, 32 and 40, and comments are added there to explain the purpose of each change. Simply speaking, each service instance now registers itself with a unique ID, which consists of the basic service name (helloworld-server in this case) and the dynamic address. Also, we add the urlprefix-/helloworld tag for each registration. urlprefix- is the default config of Fabio; you can set a customized prefix if needed.
Based on this tag, Fabio can do automatic load balancing for the /helloworld endpoint. That’s all for the code changes on the server side. Let’s review the changes for the client.

#### Client side

##### client.go

Previously, we needed to run serviceDiscoveryWithConsul to discover the service address to call. Now that we have Fabio working as the load balancer, we send the request to Fabio and it will be distributed to a service instance by Fabio. This part of the logic is implemented inside the following method:

To get the address of the Fabio service, we need to configure it as an environment variable, which is set in the yml file of Docker-compose. Let’s review the new yml file now.

#### Docker-compose config

##### docker-compose.yml

There are several changes in this yml config file:

• Add a new service: Fabio. As mentioned above, Fabio is a zero-conf load balancer which can simply run as a docker container. This is convenient and totally matches the Cloud-Native style. The two environment variables, registry_consul_addr and proxy_strategy, are set to define Consul’s address and the round-robin strategy.
• Set the FABIO_HTTP_ADDR environment variable for the client. This is what we mentioned in the last section; it allows client.go to get the Fabio service address and send requests.
• Upgrade the two docker images to v1.0.2.

#### Demo

It’s time to run the demo! Suppose you have all the docker images built on your local machine; then run the following command:

This command demonstrates an important tip about Docker-compose: how to run multiple instances of a certain service. In our case, we need multiple instances of helloworld-server for load balancing. Docker-compose supports this functionality with the --scale option. With the above command, 3 instances of helloworld-server will be launched.
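That is, the scaling flag looks like this (service name taken from the yml file described above):

```shell
docker-compose up --scale helloworld-server=3
```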
You can see the demo’s result in the following image:

The client repeatedly and periodically sends requests, and each request is distributed by Fabio to one of the three instances in round-robin style. Just what we expect!

# Service registry and discovery in Golang Cloud-Native microservice with Consul and Docker

### Background

In this post, I will give a real demo application to show how to do service registration and discovery in a Cloud-Native microservice architecture based on Consul and Docker. The service is developed in Golang. It will cover the following technical points:

• Integrate Consul with a Golang application for service registration
• Integrate Consul with a Golang application for service discovery
• Configure and run the microservices with Docker (docker-compose)

As you can see, this post covers several critical concepts and interesting tools. I will give a quick and brief introduction to them.

• Cloud-Native: this is another buzzword in the software industry. One of the key attributes of a Cloud-Native application is containerization. To be considered cloud native, an application must be infrastructure agnostic and use containers. Containers give applications the ability to run as stand-alone environments, able to move in and out of the cloud, with no dependencies on any particular cloud provider.
• Service registration and service discovery: in a microservices application, each service needs to call other services. In order to make a request, your service needs to know the network address of a service instance. In a cloud-based microservices application, the network location is dynamically assigned, so your application needs a service discovery mechanism. On the other hand, the service registry acts as a database storing the available service instances.
• Consul: Consul is the tool we use in this demo application for service registry and discovery. Consul is a member of the CNCF (Cloud Native Computing Foundation).
I will try to write a post analyzing its source code in the future.

• Docker-compose: a tool to run multi-container applications on Docker. It allows different containers to communicate with each other. In this post, I will show you how to use it as well.

All the code and config files can be found in this github repo; please checkout the service-discovery branch for this post’s demo.

### Service registry and discovery demo

To explain service registry and discovery, I will run a simple helloworld server and a client which keeps sending requests to the server every 10 seconds. The demo helloworld server registers itself in Consul; this process is service registry. On the other side, before the client sends a request to the server, it first sends a request to Consul to find the address of the server; this process is service discovery. OK, let’s show some code.

##### server.go

The above server.go file contains a lot of code, but most of it is easy; it just sets up the server and handles the requests. The interesting part is inside the function serviceRegistryWithConsul. Consul provides APIs to register a service by configuring the necessary information. For now, we can focus on two fields. The first one is ID, which is unique for each service; we also use it to search for the target service in the discovery process. The second one is Check, which means health check. Consul provides this helpful functionality. In a real microservices application, each service may have multiple instances to handle the increased requests when the concurrency is high; this is called scalability. But some instances may go down or throw exceptions, and in service discovery we want to filter these instances out. Health check in Consul serves exactly this purpose. I will show you how to do that in the next post.

##### client.go

Similarly, in the client.go file, the only key part is the serviceDiscoveryWithConsul function.
Based on the Consul APIs, we can find all the services. With the target service ID (in this demo, helloworld-server) which was set in the registration part, we can easily find the address.

The above parts show how to do service registry and discovery in a complete demo. It makes heavy use of the Consul APIs; I didn’t give too many explanations of those, since you can find more detailed information in the documentation. In the next section, I will show you how to run this demo application in a Cloud-Native way based on Docker and Docker-compose.

### Containerization

First, let’s create a Dockerfile for the server as follows:

##### Dockerfile for server.go

This part is straightforward; if you don’t understand some of the commands used here, please check Docker’s manual. I will not show the Dockerfile for the client, since it’s nearly the same as the above one, but you can find it in this github repo.

Now we have both server and client running in containers. We need to add Consul to this application as well, and connect these 3 containers together. We do this with Docker-compose. Docker-compose is driven by a yml file. In our case, it goes as follows:

There are several points that need to be mentioned about the docker-compose usage:

• networks: we define a network called my-net, and use it in all 3 services so that they can talk to each other.
• environment: we can set up environment variables in this part. In our case, both server and client need to send requests to Consul for registry and discovery, right? If you check the server and client files, we didn’t set the Consul address explicitly. Consul handles it in an implicit way: it gets the value from the environment variable named CONSUL_HTTP_ADDR. We set it up with CONSUL_HTTP_ADDR=consul:8500.
• docker-compose up: this is the only command you need to launch the application. Another helpful command is docker-compose build, which is used to build the images defined in the yml file.
Of course, docker-compose down can stop the containers when you want to leave the application. Everything is set up; you can verify the result both in the terminal and in the Consul UI as follows:

# Golang inter-goroutine communication - shared memory or channels

## Introduction

This post demonstrates how to do inter-thread communication in Golang concurrent programming. Generally speaking, there are two approaches to this fundamental question: shared memory and message passing. You’ll see how to do both in Golang, based on a case study and some tricky problems around it.

## Background

Golang is a popular and powerful programming language that aims to provide a simple, efficient, and safe way to build multi-threaded software. Concurrent programming is one of Go’s main selling points.

Go uses a concept called the goroutine as its concurrency unit. The goroutine is a complex but interesting topic; you can find many articles about it online, and this post will not cover it in detail. Simply speaking, a goroutine is a user-space-level thread which is lightweight and easy to create.

As mentioned above, one of the complicated problems in concurrent programming is that inter-thread (or inter-goroutine) communication is very error-prone. Golang provides frameworks for both shared memory and message passing; however, it encourages the use of channels over shared memory. You’ll see how both of these methods work in Golang based on the following case.

## Case study

The example is very simple: sum a collection of 10 million integers. In fact, this example is based on this good article, which used the shared memory way to realize the communication between goroutines. I expanded the example and implemented the message passing way to show the difference.

### Shared Memory

Go supports traditional shared memory access among goroutines.
You can use various traditional synchronization primitives such as lock/unlock (Mutex), condition variables (Cond) and atomic read/write operations (atomic).

In the following implementation, you can see that Go uses WaitGroup to let multiple goroutines finish their tasks before a waiting goroutine proceeds. This usage is very similar to pthread_join in C. Goroutines are added to a WaitGroup by calling the Add method, the goroutines in a WaitGroup call the Done method to notify their completion, and a goroutine calls the Wait method to wait for all goroutines’ completion.

In the example above, the int64 variable v is shared across goroutines. When this variable needs to be updated, an atomic operation is done by calling the atomic.AddInt64() method to avoid a race condition and nondeterministic results. That’s how shared memory across goroutines works in Golang. Let’s go to the message passing way in the next section.

### Message Passing

In the Golang world, one sentence is famous:

> Don’t communicate by sharing memory; share memory by communicating.

For that, the channel (chan) was introduced in Go as a new concurrency primitive to send data across goroutines. This is also the way Golang recommends you follow. So the concurrent program summing 10 million integers based on channels goes as below:

To create a typed channel, you can call the make function. In this case, since the values we need to pass are integers, we create an int-typed channel with c := make(chan int). To read and write data on that channel, you use the <- operator. For example, in the add goroutine, when we get the sum of the integers, we use c <- v to send the data to the channel. To read data from the channel in the main goroutine, we use the built-in range keyword, which can iterate through data structures like slices, maps and channels.

That’s it. Simple and beautiful.

#### Hit the Deadlock

Let’s build and run the above solution. You’ll get an error message as follows:

The deadlock issue occurs for two reasons.
First, by default, sends and receives on a channel are blocking. When data is sent to a channel, control in that goroutine is blocked at the send statement until some other goroutine reads from the channel. Similarly, when data is read from a channel, the read is blocked until some goroutine writes data to that channel. Second, range only stops when the channel is closed. In this case, each add goroutine sends only one value to the channel but never closes the channel, and the main goroutine keeps waiting for something to be written (in fact, it can read 4 values, but after that it doesn’t stop and keeps waiting for more data). So all of the goroutines are blocked and none of them can continue execution: we hit a deadlock.

#### Fix the Deadlock

Let’s use a manual for loop: in each iteration we read one value from the channel and add it to the sum. Run it again; the deadlock is resolved.

# Hack operating system by xv6 project

### Background

In this post, I want to introduce xv6, which is a “simple, Unix-like operating system”. xv6 is not only an open-source project; it is also used for teaching purposes in MIT’s Operating Systems Engineering (6.828) course, as well as at many other institutions.

If you’re like me and have always wanted to learn operating systems, I guess you’ll face a very steep learning curve, since operating systems are complex. For learning purposes, we need an operating system project which is neither too complex nor too simple. Luckily, xv6 serves exactly this purpose: it is simple enough to follow as an open source project, yet still contains the important concepts and organization of Unix.

### Resource

Since xv6 is an open source project, you can easily find many resources online. Personally, I recommend this page, which is the latest teaching resource for the corresponding MIT course. You can find source code, examples, slides and videos there. Very helpful!
### Environment setup for xv6 project

In the MIT course’s documentation, there are several solutions for setting up the xv6 development environment. You can follow those solutions, and they will work. For your convenience, I made a Dockerfile to build a docker image which contains all the necessary dependencies for working on xv6. The Dockerfile goes as follows:

# Understand stack memory management

### Table of Contents

First, I have to admit that memory management is a big (and important) topic in operating systems; this post can’t cover everything related to it. In this post, I want to share something interesting I learned about stack memory management, especially how the stack frame of a function call works, which is the most important part of stack memory. I will explain the mechanism in detail with examples and diagrams. Briefly speaking, the contents of this post are:

• Memory layout of a process
• Stack memory contents
• CPU registers related to stack memory management
• Stack frame of function call and return mechanism

To understand the concepts and mechanisms deeply, a little assembly code will be shown in this post. You don’t have to be an expert in assembly language to read this post; I will add comments to these assembly snippets to explain their functionality.

### Stack Memory Management Basics

#### Memory Layout of a Process

Before we talk about stack memory management, it’s necessary to understand the memory layout of a process. When you create a program, it is just some bytes of data stored on disk. When a program executes, it is loaded into memory and becomes a live entity. More specifically, some memory (virtual memory) is allocated by the operating system to each process in execution for its usage. The memory assigned to a process is called the process’s virtual address space (VAS for short), and memory layout diagrams help you understand the concept of the process virtual address space easily.
There are some awesome posts on the internet about memory layout. The two most critical sections are the stack and the heap. Simply speaking, the stack is the memory area used by each process to store local variables, passed arguments and other information when a function is called. This is the topic of this post; you’ll learn more details in the following sections. The heap segment is the one used for dynamic memory allocation; that is another, more complex topic outside this post’s scope.

#### Stack Memory Basics

Stack memory is just the memory region in each process’s virtual address space where a stack data structure (last in, first out) is used to store the data. As we mentioned above, when a new function call is invoked, a frame of data is pushed onto the stack memory, and when the function call returns, the frame is removed from the stack memory. Such a frame, which represents a function call and its argument data, is called a stack frame, and every function call has its own stack frame.

Let’s say your program has multiple functions calling each other in order, for example main() -> f1() -> f2() -> f3(): the main function calls function one f1(), then function one calls function two f2(), and finally function two calls function three f3(). Based on the last-in-first-out rule, the stack memory will look as below:

Note: the top of the stack is the bottom part of the image. Don’t be confused by that.

#### Stack memory contents

After understanding stack frames, let’s dive deep into each stack frame to discover its contents. Each stack frame contains 4 parts:

• Parameters passed to the callee function
• Return address of the caller function
• Base pointer
• Local variables of the callee function

Caller and callee are very easy to understand. Let’s say main() -> f1(); then the caller function is main() while the callee function is f1(). Right? For the above 4 parts, there are some things to emphasize.
First, the size of the return address of the caller function is fixed; for example, on a 32-bit system it is 4 bytes. The base pointer size is fixed as well: also 4 bytes on a 32-bit system. This fixed size is important; you’ll see the reason in the following sections.

Next, in the above diagram you see two kinds of pointers: the base pointer and the stack pointer. Let’s check them one by one.

Stack pointer: points to the top of the stack. When the stack adds or removes data, the stack pointer changes correspondingly. The stack pointer is straightforward and not difficult to understand, right?

Base pointer: also called the frame pointer, it points to the current active frame, and the current active frame is the topmost frame of the stack. The base pointer is conventionally used to mark the start of a function’s stack frame and the area of the stack managed by that function. As we mentioned above, since the sizes of the return address and base pointer are fixed, based on the address held in the base pointer you can reach all the data in that stack frame: local variables can be accessed at negative offsets and passed parameters at positive offsets. That’s the reason why it is called the base pointer. Great design, right?

The other thing we need to discuss is: what is the content of the base pointer part in each stack frame? In the above diagram you see that 4 bytes of data are pushed onto the stack, and we call it the base pointer. But what is the data? In fact, the base pointer slot is designed to store the caller's base pointer address. This is another smart design that makes the function return work well. We’ll discuss it more later.

#### CPU registers

To understand stack memory management, you’ll need to know 3 interesting CPU registers:

• eip: the instruction pointer register, which stores the address of the next instruction to be run.
• esp: the stack pointer register, which stores the address of the top of the stack at any time.
• ebp: the base pointer register, which stores the base pointer address of the callee’s stack frame.
And the content at this address is the caller's base pointer value (we already mentioned this point above).

Until now, you have seen all the necessary pieces: stack frames, stack frame contents, and CPU registers. Let's see how they play together to make stack frames work across function call and return. You'll see how this simple but beautiful design realizes such a complex task.

### Mechanism of function call and return

In this section, you'll understand how function call and return work by reading a little assembly code (which is not difficult to understand).

#### Function call

Step one: as you already saw in the diagram above, the first part of each stack frame holds the parameters passed to the callee, so all the arguments are pushed onto the stack as the following code shows. push is the assembly instruction that pushes data onto the stack, and usually the arguments are pushed in the reverse order of their declaration in the function.

Step two: the second part is the return address of the caller function, so we need to push the address of the next instruction in the caller function as the return address in the callee's stack frame. As introduced in the last section, the address of the next instruction is stored in the EIP register, right? The assembly code goes as following:

Step three: upon entry to the callee function, the old EBP value (the caller function's base pointer address) is pushed onto the stack, and then EBP is set to the value of ESP. After that, ESP is decremented (since the stack grows downward in memory) to allocate space for the local variables. The code goes as following:

mov: the mov instruction copies its source operand into its destination operand. In AT&T syntax the source comes first, so mov %esp, %ebp means "set EBP to the value of ESP". Please note that the ESP value changes whenever data is pushed onto or popped from the stack, but it always points to the top of the stack.
Before this mov %esp, %ebp instruction, ESP points to the address just after the return address of the caller, which is exactly the address of the callee's base pointer slot, just what EBP should store. So this instruction makes sense, right? From then on, during the execution of the callee function, the parameters passed to the function are located at positive offsets from EBP, and the local variables are located at negative offsets from EBP; you already saw this conclusion above. Inside a function, the stack looks like this:

#### Function return

Step one: upon exit from the callee function, all the function has to do is set ESP to the value of EBP. This simply deallocates/releases the local variables from the stack, and it also exposes the callee's base pointer slot at the top of the stack for the next step. This instruction restores the value ESP had upon entering the function, right after we did mov %esp, %ebp. Smart design, right?

Step two: since ESP has already been reset to the address of the base pointer slot, the next step is simply to pop the old EBP value off the top of the stack, as following. The pop instruction retrieves the topmost value from the stack and puts it into its operand, in this case EBP. Remember that the callee function's base pointer slot stores the caller function's base pointer, so this simple pop %ebp instruction restores the EBP register perfectly. Great design, right?

Step three: the next step is straightforward; we pop the caller function's return address into EIP, similar to step two, right? Now the system knows the next instruction to run (at the return address inside the caller function), and the execution context is given back to the caller function.

Upon returning to the caller function, it can then increase ESP again to remove the function arguments it pushed onto the stack. At this point, the stack frame is the same as it was prior to invoking the callee function.
# Use Docker container for local C++ development

### Why develop in Docker container?

Docker is the hottest technology for deploying and running software. The key benefit of Docker is that it allows users to package an application with all of its dependencies into a standardized unit for software development. Compared with virtual machines, containers do not have high overhead, and hence enable more efficient usage of the underlying system and resources.

Besides deploying and running applications, Docker containers can also make your local development work easier, especially when you need to set up a specific environment with many dependencies. In my case, I have a project which is a C++ application running on the Linux platform. But my personal machines run macOS and Windows; I don't have Linux installed on my computer. Before starting work on this project, I needed to fix this platform/environment issue. The first idea, of course, was to use a virtual machine with VirtualBox and install a Linux system in it, but that process is time-consuming and tedious. So I decided to use a Docker container to speed up the environment configuration step. I will share the experience step by step. The whole process is lightweight and quick, and it is also good practice for your Docker-related skills.

### Create the Docker container

To build a Docker image, we need a Dockerfile, which is a text document (without a file extension) that contains the instructions to set up the environment for a Docker container. The Docker official site is the best place to learn this fundamental and important knowledge. In my case, the basic Dockerfile goes as following:

FROM: the first part is the FROM instruction, which tells Docker what image to base this one on (as we know, Docker follows a multi-layer structure). In my case, it uses the Ubuntu:20.04 image, which again references a Dockerfile to automate the process.
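For reference, a minimal Dockerfile matching this description might look like the following (the exact package list is an assumption for illustration):

```dockerfile
FROM ubuntu:20.04

# Disable interactive prompts during the package installation below
ARG DEBIAN_FRONTEND=noninteractive

# Install the C++ compiler and build tools (illustrative package list)
RUN apt-get update && apt-get install -y \
    build-essential \
    gdb \
    cmake

# Default command when the container is run without an explicit command
CMD ["/bin/bash"]
```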
ARG: the ARG instruction defines a variable that can be passed at build time to pass environment info. In this case, it is just used to disable console output during the Linux package installation that follows.

RUN: the next set of calls are the RUN instructions. A RUN instruction allows you to install your application and the packages it needs; it executes commands in a new layer and creates a new layer by committing the results. Usually, you can find several RUN instructions in a Dockerfile. In this case, I want to install the C++ compiler and build tools (and some other specific dependency packages for development), which are not available in the Ubuntu base image.

CMD: the last instruction, CMD, allows you to set a default command, which will be executed when you run the container without specifying a command. If the container is run with an explicit command, this default is ignored.

With this Dockerfile, we can build the image with the next Docker command: This will build the desired Docker image tagged as linux-cpp. You can list (find) this new image on your system with the command docker images: Now you can run a Docker container from the newly built linux-cpp image:

### Mount source code into container

Following the above steps, you have a running Docker container with C++ development dependencies in a Linux environment. Next, you could put your C++ program directly inside the container, then build and run your code there. But if you only keep your program inside the container, you run a high risk of losing your code when the container is deleted. A better way is to keep your program's source code on your local machine and sync it into the container as you program. This is where mount can help you. Mounts and volumes are an important Docker topic; you can find other posts for a deeper introduction.
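The build and run steps described above might look like this on the command line (the tag linux-cpp matches the text; the -it flags are an assumption for an interactive session):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t linux-cpp .

# List images to confirm the new one exists
docker images

# Start an interactive container from the newly built image
docker run -it linux-cpp
```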
In my case, I can achieve this with the following command: the key and interesting part is -v ${PWD}:/develop, which mounts the current directory of the host machine into the /develop directory inside the container. If the /develop directory is not there, Docker will create it for you.
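Put together, the full command might look like this (the image name linux-cpp follows the build step above; -it is an assumption for an interactive session):

```shell
# PowerShell on Windows: mount the current directory at /develop
docker run -it -v ${PWD}:/develop linux-cpp
```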

Note: the way the current directory (pwd) is referenced varies with your host machine. The above command works in PowerShell on Windows; if you are using Git Bash on Windows, please try:
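One common Git Bash form adds a leading slash so MSYS does not rewrite the mount path (an assumption; verify on your own setup):

```shell
# Git Bash on Windows: leading slash avoids MSYS path conversion
docker run -it -v /$(pwd):/develop linux-cpp
```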

For Mac users, try the following:
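On macOS (and Linux) the POSIX $(pwd) substitution works directly (image name as above):

```shell
# macOS / Linux
docker run -it -v $(pwd):/develop linux-cpp
```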

Now you can happily write your program on your familiar host machine; every code change you save is synced into the container. Then build and run your code inside the container with every dependency it needs.

# Understand NgRx memoizedSelector in source code level

### Background

Selector is an essential part of the entire NgRx state management system, and based on my learning and development experience it is much more complicated than the action and reducer parts. Sometimes it feels like a black box hiding many excellent designs and techniques. I spent some time digging into the source code of NgRx to take a look at the internals of that black box. This post (and later posts) will share some interesting points I found during the process.

When using NgRx, developers always do something like this:

The createSelector method returns a selector function, which can be passed to the store.select() method to get the desired state out of the store.

By default, the type of the function returned from createSelector is MemoizedSelector<State, Result>. Have you ever noticed that? This post will introduce what it is and how it works.

### What is memoization?

Memoization is a general concept in computer science. Wikipedia explains it as follows:

In computing, memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.

You can find many articles online explaining memoization with code examples. Simply speaking, a hash map is used to store the cached results. Technically, it's not difficult at all.
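As a hedged illustration of the idea (not NgRx code), a generic memoize helper in TypeScript could use a Map keyed by the serialized argument list:

```typescript
// Generic memoization sketch: cache results in a hash map,
// keyed by the JSON-serialized argument list.
function memoize<A extends unknown[], R>(fn: (...args: A) => R): (...args: A) => R {
  const cache = new Map<string, R>();
  return (...args: A): R => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args)); // compute once per distinct input
    }
    return cache.get(key)!;        // afterwards, serve the cached result
  };
}

const square = memoize((n: number) => n * n);
console.log(square(4)); // computed
console.log(square(4)); // served from the cache
```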

Memoization is a great optimization for pure functions. Generally speaking, a pure function is a function whose return value is determined only by its input values, without side effects.

As you may know, a Selector is a pure function, and a memoizedSelector is just a normal selector function with memoization optimization added. Next, let's see how it works in the design of the NgRx library.

### Source code of memoizedSelector

In the source code of NgRx, you can find the selector related code in the path of platform/modules/store/src/selector.ts.

The selector.ts file is roughly 700 lines and holds all of the selector functionality. There are many interesting points inside this module, which I may share in another article, but this post focuses on memoization. So I picked out all the necessary code, as follows:

There is a lot of interesting TypeScript in the above code block, but for memoization you can focus on the defaultMemoize method. In the following section, I will show you how it can make your program run faster.

### Explore the memoizedSelector method

To show how memoization works, I created a simple method slowFunction, shown below, to simulate a method that runs very slowly:

And then tested it with the following script:
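A sketch of what such a test might look like (the busy loop and the names slowFunction/fastFunction are illustrative; the one-entry cache mimics the defaultMemoize behavior discussed earlier):

```typescript
// slowFunction burns CPU time to simulate an expensive computation.
function slowFunction(n: number): number {
  let acc = 0;
  for (let i = 0; i < 5e7; i++) acc += 1; // busy work; result still depends only on n
  return n * 2;
}

// fastFunction wraps slowFunction with a last-result cache.
function memoizeLast(fn: (arg: number) => number) {
  let lastArg: number | undefined;
  let lastResult = 0;
  return (arg: number): number => {
    if (arg !== lastArg) {
      lastArg = arg;
      lastResult = fn(arg); // only recompute for a new input
    }
    return lastResult;
  };
}

const fastFunction = memoizeLast(slowFunction);

console.time('first call (slow)');
console.log(fastFunction(21)); // 42, runs the loop
console.timeEnd('first call (slow)');

console.time('second call (cached)');
console.log(fastFunction(21)); // 42, returned from the cache
console.timeEnd('second call (cached)');
```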

The output goes as following:

Compared with the original slowFunction method, the memoized fastFunction can directly output the result for the same input. That's the power of memoization; I hope you can master it.