Sunday, April 23, 2017

Cloud Native Era - gRPC & Protocol Buffer

Further to my previous blog on Go, I kept investigating the best way to write APIs. I am convinced that, going forward, no single language will hold a monopoly. The onus will be on us to pick the language that best solves the problem at hand. We should widen our horizons and think outside the box.

REST has redefined the way we interact with applications. But REST is not the answer to every problem in this world of distributed computing (cloud), where APIs are distributed and coded in different languages. Consider a PSP (Payment Service Provider) or an insurance system. Systems like these are built using microservices (REST APIs), i.e. they are broken into self-contained APIs. Excellent!!! But the real business of a PSP or an insurer lies in orchestrating these APIs into business processes (payment engine, claim processing, etc.), and these business processes generate the company's revenue. It is very difficult to model such business flows in the REST (noun, verb) paradigm, for reasons explained in the blog "Why we have decided to move our APIs to gRPC".

We are in the era of cloud-native applications, where microservices must scale massively and performance is critical. There is a real need for a high-performance communication mechanism between microservices. The big question, then, is whether JSON-based APIs provide the performance and scalability that modern applications require. Is JSON really a fast data format for exchanging data between applications? Is the RESTful architecture capable of building complex APIs? Can we easily build bidirectional streaming APIs with it? The HTTP/2 protocol offers far more capability than its predecessor, so it is high time we leveraged it when building next-generation APIs. Let's investigate how gRPC and Protocol Buffers help.

Protocol Buffers, also referred to as protobuf, is Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data. Protocol Buffers messages are smaller, faster to parse, and simpler than other standards such as XML and JSON, which translates into higher performance.

gRPC is a modern, open-source remote procedure call (RPC) framework, developed by Google, that can run anywhere. It enables client and server applications to communicate transparently, and makes it easier to build connected systems. gRPC runs over HTTP/2 and hence follows HTTP semantics. It supports both synchronous and asynchronous communication models. It supports the classical request/response model and, my personal favourite, bidirectional streams. gRPC is capable of full-duplex streaming, so both client and server applications can send streams of data asynchronously.

By default, gRPC uses Protocol Buffers as the Interface Definition Language (IDL) and as its underlying message interchange format, but the IDL is pluggable and a different one can be swapped in. Unlike JSON and XML, Protocol Buffers are not just a message interchange format; they also describe the service interfaces (service endpoints). Thus Protocol Buffers define both the service interface and the structure of the payload messages. In gRPC, one defines services and their methods along with the payload messages. As in a typical RPC system, a gRPC client application can call methods on a remote server directly, as if it were a local object.

Now the real fun. Let's get our hands dirty with gRPC + proto + Go/Java :-) and demonstrate the concepts we just learnt by building a "Greeter" service. I am becoming a polyglot programmer :-)

A typical gRPC service can be built in three steps as shown below:




Step 1: Define Service Definition (proto) & Generate

The IDL (proto) for "Greeter" looks like:

For GO language

syntax = "proto3";
package greeter;
// The greeting service definition.
service Greeter {
    // Sends a greeting
    rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
    string name = 1;
}
// The response message containing the greetings
message HelloReply {
    string message = 1;
}

For Java language

syntax = "proto3";

option java_multiple_files = true;
option java_package = "com.gmalhotr.grpc.greeter.api";
option java_outer_classname = "HelloWorld";
option objc_class_prefix = "GREETER";

package greeter;
// The greeting service definition.
service Greeter {
    // Sends a greeting
    rpc SayHello (HelloRequest) returns (HelloReply) {}
}
// The request message containing the user's name.
message HelloRequest {
    string name = 1;
}
// The response message containing the greetings
message HelloReply {
    string message = 1;
}
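To see why protobuf messages are so compact, here is the wire encoding of the `HelloRequest` above hand-rolled with nothing but the Go standard library, compared against the equivalent JSON. This is an illustrative sketch under two assumptions: real applications use the protoc-generated package rather than hand-encoding, and the single length byte used here only covers strings shorter than 128 bytes.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// encodeHelloRequest hand-encodes HelloRequest{name} in protobuf wire
// format: field number 1 with wire type 2 (length-delimited) gives the
// tag byte 0x0A, followed by the length and the raw UTF-8 bytes.
func encodeHelloRequest(name string) []byte {
	buf := []byte{0x0A, byte(len(name))}
	return append(buf, name...)
}

func main() {
	pb := encodeHelloRequest("gaurav")
	js, _ := json.Marshal(map[string]string{"name": "gaurav"})
	fmt.Printf("protobuf: %d bytes (% x)\n", len(pb), pb) // protobuf: 8 bytes
	fmt.Printf("json:     %d bytes (%s)\n", len(js), js)  // json:     17 bytes
}
```

Two header bytes plus the payload versus the full `{"name":"gaurav"}` text: the field name never travels on the wire, only its number.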


Generate the server and client stubs using protoc (for Go) and the Maven Protocol Buffers plugin (for Java).

For this application, I chose IntelliJ. The following picture shows the above artifacts in the project layout:


Step 2: Create Server and Client

Server in GO


package main

import (
	"log"
	"net"

	"golang.org/x/net/context"
	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"

	pb "../api"
)

const (
	port = ":50051"
)

// server is used to implement api.GreeterServer.
type server struct{}

// SayHello implements api.GreeterServer.
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
	return &pb.HelloReply{Message: "Warm greeting " + in.Name + " ...."}, nil
}

func main() {
	lis, err := net.Listen("tcp", port)
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	s := grpc.NewServer()
	pb.RegisterGreeterServer(s, &server{})
	// Register the reflection service on the gRPC server.
	reflection.Register(s)
	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}


Server in Java


package com.gmalhotr.grpc.helloworld.server.impl;
import com.gmalhotr.grpc.greeter.api.GreeterGrpc;
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;
public class TheGreeterServer {
    private static final Logger LOG = LoggerFactory.getLogger(TheGreeterServer.class);
    private final int port = 12345;
    private Server server;
    /**
     * Main launches the server from the command line.
     */
    public static void main(String[] args) throws IOException, InterruptedException {
        final TheGreeterServer server = new TheGreeterServer();
        server.start();
        server.blockUntilShutdown();
    }

    private void blockUntilShutdown() throws InterruptedException {
        // Block the main thread until the server terminates,
        // rather than shutting it down immediately.
        if (server != null) {
            server.awaitTermination();
        }
    }
    private void start() {
        try {
            server = NettyServerBuilder
                    .forPort(port)
                    .addService(GreeterGrpc.bindService(new TheGreeter()))
                    .build()
                    .start();
            LOG.info("Server started, listening on {}", port);
            CompletableFuture.runAsync(() -> {
                try {
                    server.awaitTermination();
                } catch (InterruptedException ex) {
                    LOG.error(ex.getMessage(), ex);
                }
            });
            Runtime.getRuntime().addShutdownHook(new Thread() {
                @Override
                public void run() {
                    // Use stderr here since the logger may have been reset by its JVM shutdown hook.
                    System.err.println("*** shutting down gRPC server since JVM is shutting down");
                    TheGreeterServer.this.stop();
                    System.err.println("*** server shut down");
                }
            });
            System.err.println("*** server started");
        } catch (IOException ex) {
            LOG.error(ex.getMessage(), ex);
        }
    }
    private void stop() {
        if (server != null) {
            server.shutdown();
        }
    }
}

Below is the implementation of "TheGreeter", which sends a greeting to the client:

package com.gmalhotr.grpc.helloworld.server.impl;
import com.gmalhotr.grpc.greeter.api.GreeterGrpc;
import com.gmalhotr.grpc.greeter.api.HelloReply;
import com.gmalhotr.grpc.greeter.api.HelloRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import io.grpc.stub.StreamObserver;
public class TheGreeter implements GreeterGrpc.Greeter {
    private static final Logger LOG = LoggerFactory.getLogger(TheGreeter.class);
    @Override
    public void sayHello(HelloRequest request, StreamObserver<HelloReply> responseObserver) {
        LOG.info("sayHello endpoint received request from " + request.getName());
        HelloReply reply = HelloReply.newBuilder().setMessage("Hello " + request.getName()).build();
        responseObserver.onNext(reply);
        responseObserver.onCompleted();
    }
}

The picture below shows the server and client in the project layout in the IDE:



Step 3: Execute

Start the server first, then run the client; the following output will appear:

2017/04/23 16:58:07 Greeting: Warm greeting gaurav ...

Conclusions
gRPC provides a very simple mechanism to perform RPC in a language- and platform-neutral way. It is far simpler than EJB, CORBA, webhooks, etc., and runs over HTTP/2. As the saying goes, seeing is believing: take a look at the HTTP 2 Demo site to witness the performance boost HTTP/2 brings. With the rise of the cloud (distributed computing), I strongly believe the future lies in polyglot programming: use the language that best solves the problem. gRPC can be used to implement complex flows.

It is also worth mentioning that the amount of code I had to write in Go for the Greeter was far less than in Java.


Sunday, April 16, 2017

REST API Using GO

Recently I have been working on developing a strategy/vision for products. A microservices-based architecture is a clear winner in any new strategy, for reasons that are quite obvious. The objective of this blog is not to explain the advantages of microservices but to explore a programming vision that can last the next decade.

I have been working with Java for 18 years, so obviously I started thinking about Java while developing this vision. In the Java world, in order to build a simple REST API, I need at least the following:
  •         Webserver - Tomcat or JBoss or Weblogic etc
  •        Jersey (for implementing REST API)
  •        Java 
  •        Other dependent jars

I started asking myself a simple question: why does one have to learn a web server, Jersey, etc. just to write an API? I could obviously use Spring Boot, which abstracts the above dependencies to a certain extent, but still, why???

After a while, I told myself to take a few steps back and start fresh. Throw away all the bias towards Java. Flush out all the Martin Fowler blogs that I learnt over the years :-) etc. etc.

As a developer, I wish to write APIs easily, with fewer dependencies on the surrounding ecosystem. I started looking into Google's Go programming language and found it easy to learn. It is a small, self-contained language that runs really fast.

In this blog, I will explain how to write REST APIs using Go. Let's assume we wish to write a Person API that performs the following:
  1. Get all persons
  2. Get person by id
  3. Create a person
  4. Delete a person

Building the Person API in Go is essentially a four-step process, as shown pictorially below:
STEP 1: Define Data (Structures)

In GO a person (APerson) with an address (AnAddress) can be modelled as shown below:

type APerson struct {
	ID        string     `json:"id,omitempty"`
	Firstname string     `json:"firstname,omitempty"`
	Lastname  string     `json:"lastname,omitempty"`
	AnAddress *AnAddress `json:"address,omitempty"`
}

type AnAddress struct {
	City  string `json:"city,omitempty"`
	State string `json:"state,omitempty"`
}

NOTE: Go has built-in support for JSON, so we can bind the modelled data elements directly to JSON via struct tags.
NOTE: For the sake of simplicity, I have not included DB interactions in this blog.

STEP 2: APIs Definition

1) GetAllPersons – Implementation for API which fetches all the persons

func GetAllPersons(w http.ResponseWriter, req *http.Request) {
       json.NewEncoder(w).Encode(persons)
}

2) GetAPersonById – Implementation of API that fetches a person by Id

func GetAPersonById(w http.ResponseWriter, req *http.Request) {
       params := mux.Vars(req)
       for _, item := range persons {
              if item.ID == params["id"] {
                     json.NewEncoder(w).Encode(item)
                     return
              }
       }
       json.NewEncoder(w).Encode(&APerson{})
}

3) CreateAPerson – Implementation of API that creates a person

func CreateAPerson(w http.ResponseWriter, req *http.Request) {
	params := mux.Vars(req)
	var aPerson APerson
	_ = json.NewDecoder(req.Body).Decode(&aPerson)
	aPerson.ID = params["id"]
	persons = append(persons, aPerson)
	json.NewEncoder(w).Encode(aPerson)
}

4) DeleteAPerson  - Implementation of API that deletes a person

func DeleteAPerson(w http.ResponseWriter, req *http.Request) {
       params := mux.Vars(req)
       for index, item := range persons {
              if item.ID == params["id"] {
                     persons = append(persons[:index], persons[index+1:]...)
                     break
              }
       }
       json.NewEncoder(w).Encode(persons)
}
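The `append(persons[:index], persons[index+1:]...)` line in `DeleteAPerson` is Go's standard idiom for removing element i from a slice; here it is in isolation:

```go
package main

import "fmt"

// removeAt deletes the element at index i by re-appending the tail
// over it; order is preserved and the slice shrinks by one.
func removeAt(s []string, i int) []string {
	return append(s[:i], s[i+1:]...)
}

func main() {
	ids := []string{"1", "2", "3"}
	fmt.Println(removeAt(ids, 1)) // [1 3]
}
```

One caveat worth knowing: this mutates the backing array of the original slice, which is fine for the in-memory store used in this post.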

STEP 3: Create a Router

A router registers routes and dispatches each incoming request to the matching handler function:

  router := mux.NewRouter()
  router.HandleFunc("/persons", GetAllPersons).Methods("GET")
  router.HandleFunc("/persons/{id}", GetAPersonById).Methods("GET")
  router.HandleFunc("/persons/{id}", CreateAPerson).Methods("POST")
  router.HandleFunc("/persons/{id}", DeleteAPerson).Methods("DELETE")

STEP 4: HTTP Listen and Serve

Start an HTTP server listening on port 8080 to handle all the requests/routes defined above.

http.ListenAndServe(":8080", router)
Running the server and issuing GET /persons (with two persons seeded) returns:
[
  {
    "id": "1",
    "firstname": "Gaurav",
    "lastname": "Malhotra",
    "address": {
      "city": "Utrecht",
      "state": "NL"
    }
  },
  {
    "id": "2",
    "firstname": "Shikha",
    "lastname": "Malhotra"
  }
]


Summary – Complexity to Simplicity

The Go language provides a very simple mechanism for writing REST APIs with minimal dependencies. As a developer I can focus on developing my APIs without having to learn a whole dependency ecosystem around them. Go compiles directly to machine code and hence runs extremely fast. After writing my first API in Go, I can see it has a bright future, backed by an ever-faster-growing open-source community, and it makes building a microservices-based architecture easy. Go also has built-in garbage collection, which simplifies memory management, and powerful support for concurrent execution (goroutines). Last but not least, with so few dependencies it is very easy to write unit and integration tests. It's worth a look!!!

Sunday, April 9, 2017

Caching Strategy

Why Cache Objects?


Object caching allows applications to share objects across requests and users, and coordinates the objects' life cycles across processes. By storing frequently accessed or expensive-to-create objects in memory, object caching eliminates the need to repeatedly create and load data. It avoids the expensive reacquisition of objects by not releasing the objects immediately after their use. Instead, the objects are stored in memory and reused for any subsequent client request.

Advantages of Caching


One of the main benefits of object caching is a significant improvement in application performance. In a multi-tiered application, data access is an expensive operation compared to other tasks. Keeping frequently accessed data around instead of releasing it after first use avoids the cost and time of re-acquiring and releasing it.

Following are the benefits of caching:

  •        Performance — Fast access to frequently used resources is the explicit benefit of caching. When the same resource is needed again, it does not have to be acquired or fetched from elsewhere; it is already available.
  •        Scalability — Caching by its nature holds on to frequently used resources instead of releasing them. It thereby avoids the cost of repeatedly acquiring and releasing those resources, which has a positive effect on scalability.

Disadvantages of Caching


Object caching also comes with a few disadvantages, the key ones being:-

  •       Synchronization Complexity — Depending on the kind of resource, complexity increases because consistency between the state of the cached resource and the original data, which the resource is representing, needs to be ensured.
  •        Durability — Changes to the cached resource can be lost when the system crashes. However, if a synchronized cache is used or cache coordination, then this problem can be avoided.
  •        Footprint — The run-time footprint of the system is increased as possibly unused resources are cached. However, if an Evictor is used, then the number of such unused cached resources can be minimized.




Caches Classification


Data-usage predictability influences the caching strategy. Based on it, caches can be classified as:-

  •        Primed Cache: - The primed-cache pattern is applicable when the cache, or part of it, can be predicted in advance. This pattern is very effective for static resources: they are prefetched and stored in the cache during application startup to give better performance/response times, e.g. the loading of web pages (UI).
  •        Demand Cache: - The demand cache is suitable when future resource demand cannot be predicted. The calling module acquires and stores a resource in the cache only when it is needed. This optimizes the cache and achieves a better hit rate. As soon as a resource is acquired, it is stored in the demand cache, and all subsequent requests for it are satisfied from there.

NOTE: The primed cache is populated at the beginning of the application (prefetched/cache warmed), whereas the demand cache is populated during the execution of the application.
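The demand-cache behaviour described above (acquire on first use, serve from memory afterwards) can be sketched in a few lines. This is an illustrative in-process cache, not a production library; the loader function stands in for whatever expensive acquisition the real system performs:

```go
package main

import (
	"fmt"
	"sync"
)

// DemandCache loads a value the first time a key is requested and
// serves every subsequent request for it from memory.
type DemandCache struct {
	mu     sync.Mutex
	data   map[string]string
	loader func(key string) string
	Loads  int // how many times the expensive loader actually ran
}

func NewDemandCache(loader func(string) string) *DemandCache {
	return &DemandCache{data: map[string]string{}, loader: loader}
}

func (c *DemandCache) Get(key string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.data[key]; ok {
		return v // cache hit: no re-acquisition cost
	}
	v := c.loader(key) // cache miss: acquire once, then keep
	c.data[key] = v
	c.Loads++
	return v
}

func main() {
	cache := NewDemandCache(func(key string) string {
		return "value-for-" + key // stands in for an expensive fetch
	})
	cache.Get("claim-42")
	cache.Get("claim-42")    // served from cache; loader not called again
	fmt.Println(cache.Loads) // 1
}
```

A primed cache would differ only in when the map is filled: all predicted keys would be loaded up front at startup rather than inside Get.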

Caches can be further classified into the following two types, each of which can in turn be either a primed or a demand cache depending on data-usage predictability:-
1.       ORM cache
2.       In-process cache

ORM/JPA Cache


ORM/JPA frameworks like Hibernate and EclipseLink bridge the impedance mismatch between object-oriented programming (OOP) and relational database management systems (RDBMS). An ORM's (JPA) cache can be layered into two categories: the read-only shared cache used across processes, applications, or machines, and the updatable, write-enabled transactional cache that coordinates the unit of work. ORMs use a layered, two-level caching architecture: the first layer is the transactional cache and the second is the shared cache, designed as a process or clustered cache.



Transactional Cache


Entities formed in a valid state and participating in a transaction are stored in the transactional cache. Transactions are characterized by their ACID (Atomicity, Consistency, Isolation, Durability) properties, and the transactional cache exhibits the same ACID behaviour. Transactions are atomic in nature; each transaction will either be committed or rolled back. When a transaction is committed, the associated transactional cache is updated and synced with the shared cache (explained in the next section). If a transaction is rolled back, all participating objects in the transactional cache are restored to their pre-transaction state.

Shared Cache


The shared cache can be implemented as a process cache or clustered cache. A process cache is shared by all concurrently running threads in the same process. A clustered cache is shared by multiple processes on the same machine or by different machines. Distributed-caching solutions implement the clustered cache.
Entities stored in the transactional cache are useful for optimizing the transaction. As soon as the transaction is over, they can be moved into the shared cache. All read-only requests for the same resource can then be fulfilled by the shared cache, and because the shared cache is read-only, cache coherency problems are easily avoided: read-only queries execute against the shared cache without the transactional cache ever needing to be synchronized with it.
The shared cache can be implemented using a distributed cache. But distributed caches add overheads such as serialization/deserialization costs, along with the network traffic needed to keep a backup copy of the data in case of failure, plus deployment overheads. An alternative strategy for synchronizing caches is cache coordination, which works as follows:
1.       Detect a change to an entity
2.       Relay only the name of entity along with its identification i.e. primary key to other nodes. (In case JPA/ORM is used as persistence layer, it’s easy to use JPA callback hooks to relay this information)
3.       Once the notification is received just invalidate shared cache for that entity (using primary key)
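The three coordination steps above can be sketched as follows. Each node keeps its own local map, and only the entity name plus primary key travels between nodes, never the changed data itself. This is an illustrative in-process stand-in for a real messaging channel:

```go
package main

import "fmt"

// node holds a local shared cache keyed by "Entity/primaryKey".
type node struct {
	name  string
	cache map[string]string
}

// invalidation carries only the entity name and primary key (step 2),
// never the changed data itself.
type invalidation struct {
	entity string
	pk     string
}

// onChange relays an invalidation to every other node (steps 1-2);
// each receiver simply evicts the stale entry (step 3).
func onChange(inv invalidation, origin *node, peers []*node) {
	for _, p := range peers {
		if p != origin {
			delete(p.cache, inv.entity+"/"+inv.pk)
		}
	}
}

func main() {
	a := &node{"A", map[string]string{"Person/1": "Gaurav"}}
	b := &node{"B", map[string]string{"Person/1": "Gaurav"}}
	// Node A updates Person 1, so it tells its peers to drop their copy.
	onChange(invalidation{"Person", "1"}, a, []*node{a, b})
	_, stillCached := b.cache["Person/1"]
	fmt.Println(stillCached) // false: B will re-read on next access
}
```

In a real JPA setup, step 1 corresponds to an entity lifecycle callback (e.g. a post-update hook) and the relay would go over JMS, a cache-coordination channel, or similar.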










Decide What to Cache?


Caching the right data is the most critical aspect of caching. If we fail to get this right, we can end up reducing performance instead of improving it: we may consume more memory and at the same time suffer cache misses, where the data is not actually served from the cache but is refetched from the original source.

Cacheable Data Classification

In components, the data that requires caching can be classified into the following:-

  •         Truly Static Data
  •          Mostly Static Data
  •         In Flight Static Data
  •         In Process Data

Truly Static Data


The data in this category is nonvolatile, i.e. it rarely changes. Such data should reside in JVM caches that are never evicted.
Recommendation: Local Cache (L2 cache) (like Guava caches, never-expiring)

Mostly Static Data


The data in this category is also relatively nonvolatile, i.e. it changes very rarely. Such data should reside in JVM caches, with changed entries invalidated via ORM/JPA callback hooks (refer to the cache coordination mechanism above).
Recommendation: Local Cache (L2 cache) + Cache Coordination

In Flight Static Data


The data in this category is volatile by nature, but no two threads access it simultaneously. It is worth mentioning that the object graphs of such data are very large and take ample time during the ORM's object-construction phase. These objects need to be evicted as soon as they are no longer in use (dereferenced), hence a SoftWeak cache type is recommended (along with cache coordination).

Recommendations:
1.       Local Cache (L2 cache) with SoftWeak references + Cache Coordination
2.       NO CACHING


In Process Data


The data in this category is also fairly static, but it is constructed while executing a functional algorithm, for example a process-derivation-rule cache that stores entities built via complex native SQL. Data like this falls under "In Process Data" and should live in an in-process cache.
Recommendation: Local Cache (Query Cache) + Cache Coordination

Choosing the Right Cache Size


There is no definite rule regarding the size of the cache. Cache size depends on the available memory, the underlying hardware, and the memory settings. There are two possible approaches to sizing the caches:
  •        Approach 1: An effective caching strategy is based on the Pareto principle (the 80-20 rule): size the cache for the roughly 20% of entities that serve roughly 80% of the requests.
  •        Approach 2: Size of the cache = number of rows for that entity in the database at application start.

Cache Taxonomy

The picture below shows the cache taxonomy:







Sunday, March 28, 2010

DSL using Groovy

Last night, I was playing with the dynamic language Groovy to write my very own DSL (domain-specific language). In this blog I will explain how to write a DSL using Groovy.

This blog assumes programming knowledge of Groovy and of the underlying concepts of where and why a DSL should be used.

Let's start with my aim: to write a DSL that reads like "take 1 pill of disprin after 10 hours".

More specifically,

take 1.pill,
    of: disprin,
    after: 10.hours


Write the above script in the IDE of your choice and try to execute it. The execution fails with the following error:

Caught: groovy.lang.MissingPropertyException: No such property: disprin ..

Quite logical: Groovy cannot resolve "disprin". Let's add the property dynamically:

this.metaClass.getProperty= {name->
  def metaProperty= this.metaClass.getMetaProperty(name)
    //'getMetaProperty' gets property, which may be an added or an existing one
  metaProperty? metaProperty.getProperty(delegate): "" + delegate
}


Let's run the script again. This time, the following exception appears:

Caught: groovy.lang.MissingPropertyException: No such property: hours for class: java.lang.Integer

The above error tells us that Groovy cannot find the property hours on the class Integer. Groovy can magically add properties to any Java or JDK class, as shown below:

Number.metaClass.getHours = { ->
    println ("getHours delegate >> " + delegate)
    new Integer(delegate) // last expression, so the closure returns the Integer
}

Running the script again produces the following error:

Caught: groovy.lang.MissingPropertyException: No such property: pill for class: java.lang.Integer


Let's add the property pill to Integer as well:



Number.metaClass.getPill = { ->
    println ("getPill delegate >> " + delegate)
    new Integer(delegate) // last expression, so the closure returns the Integer
}

Let's run the script again; now the following exception appears:
Caught: groovy.lang.MissingMethodException: take()

Finally, let's add the missing method take. A missing method can be added dynamically at runtime as shown below:

metaClass.invokeMethod= {name, args->
            def metaMethod= this.metaClass.getMetaMethod(name, args)
            //'getMetaMethod' gets method, which may be an added or an existing one
            // Call out to missing method can be handled here
            metaMethod? metaMethod.invoke(delegate,args): name
}


Hence the complete script looks like this:

this.metaClass.getProperty= {name->
  def metaProperty= this.metaClass.getMetaProperty(name)
    //'getMetaProperty' gets property, which may be an added or an existing one
  metaProperty? metaProperty.getProperty(delegate): "" + delegate
}

Number.metaClass.getHours = { -> new Integer(delegate) }

Number.metaClass.getPill = { -> new Integer(delegate) }

metaClass.invokeMethod= {name, args->
            def metaMethod= this.metaClass.getMetaMethod(name, args)
            //'getMetaMethod' gets method, which may be an added or an existing one
            // Call out to missing method can be handled here
            metaMethod? metaMethod.invoke(delegate,args): name
}

/*
 * Just to test
 * println (take (1.pill,of:disprin,after: 6.hours))
 */

take 1.pill,
     of:disprin,
     after:6.hours

Conclusion

With Groovy it's very simple to create a DSL. Moreover, the Groovy-specific plumbing code below

this.metaClass.getProperty= {name->
  def metaProperty= this.metaClass.getMetaProperty(name)
    //'getMetaProperty' gets property, which may be an added or an existing one
  metaProperty? metaProperty.getProperty(delegate): "" + delegate
}

Number.metaClass.getHours = { -> new Integer(delegate) }

Number.metaClass.getPill = { -> new Integer(delegate) }

metaClass.invokeMethod= {name, args->
            def metaMethod= this.metaClass.getMetaMethod(name, args)
            //'getMetaMethod' gets method, which may be an added or an existing one
            // Call out to missing method can be handled here
            metaMethod? metaMethod.invoke(delegate,args): name
}

can be abstracted away into the domain model, so the DSL itself simply looks like:


take 1.pill,
     of:disprin,
     after:6.hours