Wednesday, December 30, 2020

Integrating multiple data sources to OData API with Spring Boot and Teiid

 

This example shows how to integrate data from an H2 database and a CSV file, then expose it as an OData API with Spring Boot and Teiid.

Teiid is a data virtualization tool created by Red Hat for data integration, and it can expose the integrated data as an OData API. Besides H2 databases and CSV files, Teiid supports a wide range of data sources such as Oracle DB, MS SQL, MySQL, MongoDB, REST APIs, Excel files, JSON, Google Spreadsheets, OData APIs, etc. You can find out more from the official samples.

The project demonstrates how to use the Teiid Spring Boot runtime to join the product_symbols table of the H2 database and the product_data CSV file on the productId column, and then create a product view. You can see from the diagram above that the name and type columns of the product view come from the product_data CSV file, and the rest come from the product_symbols table.
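
If you are curious what such a join looks like in code, the Teiid Spring Boot programming model lets you define a view as a JPA-style entity annotated with a federated query. The sketch below is a hedged illustration rather than the actual code from the demo repository; the class name and every column other than productId, name and type are assumptions, and it presumes the @SelectQuery annotation from Teiid Spring Boot.

// A simplified, illustrative sketch of a Teiid Spring Boot view entity, NOT the actual
// code from the demo repository. Column names other than productId, name and type are
// hypothetical placeholders.
import javax.persistence.Entity;
import javax.persistence.Id;

import org.teiid.spring.annotations.SelectQuery;

@Entity
@SelectQuery("SELECT s.productId, s.symbol, d.name, d.type "
        + "FROM product_symbols s JOIN product_data d ON s.productId = d.productId")
public class Product {

    @Id
    private Long productId;  // from the product_symbols table (H2)
    private String symbol;   // hypothetical column from product_symbols (H2)
    private String name;     // from the product_data CSV file
    private String type;     // from the product_data CSV file

    // getters and setters omitted for brevity
}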

The product view is then exposed as an OData API that can be accessed by clients such as Excel, Power BI, Java applications, .NET applications, etc.
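
As a quick illustration of the Java client side, the JDK 11+ HttpClient can query the view with standard OData query options such as $select and $filter. The base URL, entity set name and the 'STOCK' type value below are assumptions for illustration only; the actual path depends on how the demo application is configured.

// Hedged example of consuming the resulting OData endpoint from a plain Java client.
// The base URL and entity set name are assumptions, not the demo's real configuration.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ODataClientExample {
    public static void main(String[] args) throws Exception {
        // $select and $filter are standard OData query options:
        // fetch only productId and name for products of a given type.
        String url = "http://localhost:8080/odata/Product"
                + "?$select=productId,name&$filter=type%20eq%20'STOCK'&$format=json";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body()); // JSON document with the selected fields
    }
}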

You can find out more by taking a look at the source code and the steps to run the example locally on your own machine at https://github.com/limcheekin/teiid-spring-boot-demo

Happy new year 2021! I'd love to hear your comments :).

Thursday, December 17, 2020

Implementing GraphQL API with Micronaut and AWS Lambda

First of all, the GraphQL API is created using the following technology stack:



As per my own convention, I like to explore new technology by implementing an address book application with the following database schema:

The application is currently running live on AWS Lambda with a MySQL database. You can test it out by sending an HTTP POST request with the following GraphQL query in JSON to https://39nx6hi1ac.execute-api.ap-southeast-1.amazonaws.com/Prod/graphql:

{
"query":"{
findById(id: 27) {
lastName firstName gender age
}
}"
}

Please make sure the JSON above is sent in one line; I formatted it here for ease of reading. If you are not sure what tool to use, I'm using Postman.
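
If you prefer to call the endpoint from Java instead of Postman, here is a minimal sketch using the JDK 11+ HttpClient; it sends the same query as above as a single-line JSON string.

// Minimal sketch: POST the GraphQL query above to the live endpoint.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GraphQLClientExample {
    public static void main(String[] args) throws Exception {
        String endpoint = "https://39nx6hi1ac.execute-api.ap-southeast-1.amazonaws.com/Prod/graphql";
        // Note the escaped quotes: the GraphQL query travels as a JSON string on one line.
        String body = "{\"query\":\"{ findById(id: 27) { lastName firstName gender age } }\"}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}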

You will get the following JSON response:

{
"data": {
"findById": {
"lastName": "Lim",
"firstName": "Chee Kin",
"gender": "MALE",
"age": 40,
}
}
}

Next, you can further extend the query to include fields of master-detail data:
{
"query":"{
findById(id: 27) {
lastName firstName gender age
contacts {id type value}
}
}"
}

You will get the following JSON response:

{
"data": {
"findById": {
"lastName": "Lim",
"firstName": "Chee Kin",
"gender": "MALE",
"age": 40,
"contacts": [
{
"id": 28,
"type": "MOBILE_NUMBER",
"value": "+60123456789"
},
{
"id": 29,
"type": "WORK_NUMBER",
"value": "+604567890"
},
{
"id": 30,
"type": "EMAIL_ADDRESS",
"value": "test@test.com"
}
]
}
}
}

The source code of the application is hosted in the GitHub repository at https://github.com/limcheekin/micronaut-person/tree/graal-data-jpa-2.0.x


It is continuously built into a native app using GraalVM in a Docker container and deployed by GitHub Actions to an AWS Lambda custom runtime.

This blog post only scratches the surface of the GraphQL API. Kindly let me know if you are interested in finding out more about how the API is created, how the continuous deployment with GitHub Actions works, how to create the AWS API Gateway and AWS Lambda custom runtime, etc. I will write a dedicated blog post for each topic.

I'd love to hear from you! :)

Updated on Dec 23: If GraalVM is new to you and you're curious about the reasons I went for GraalVM, please check out the well-written article at DZone.

Wednesday, December 2, 2020

Standard query language for your Web API, why GraphQL and OData?

GraphQL

GraphQL was created by Facebook in 2012 and released publicly in 2015. I got the following definition of GraphQL from Wikipedia:
GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data.
In my opinion, the killer feature of GraphQL is on the data query side rather than data manipulation, as REST APIs already have good enough data manipulation capability. The following are the key reasons I adopted GraphQL in my Web API:
  • Client applications where network bandwidth is limited and latency is critical, such as mobile applications, get to choose which fields of the object graph (a hierarchical structure of related objects) are included in the response.
  • Client applications can inspect the schema of a GraphQL API via its introspection capability. Development tools for strongly-typed programming languages such as Java, Dart, Go, etc. can generate client code via GraphQL introspection, which greatly improves developer productivity.
Updated on Dec 17: You can find out more from a GraphQL API implemented by me in the following blog post: 

OData

OData was created by Microsoft in 2007. I got the following definition of OData from Wikipedia:
Open Data Protocol (OData) is an open protocol that allows the creation and consumption of queryable and interoperable REST APIs in a simple and standard way.
A blog post published by Progress has a better definition of OData:
OData advocates a standard way of implementing REST APIs that allows for SQL-like querying capabilities using these RESTful APIs. OData is essentially SQL for the web built on top of standard protocols – HTTP, JSON & ATOM XML – while leveraging the REST architecture style.
In my opinion, the benefits of OData are similar to GraphQL's, with the following differences:
  • OData supports schema discovery by querying the metadata directly via the $metadata URL, without introspection (see the sketch after this list).
  • OData supports ATOM XML and JSON responses.
  • OData has been adopted by many technologies and companies, including SAP, IBM, Salesforce, Tableau, Databoom, Progress, Red Hat and Dell, especially in spreadsheet, analytics, reporting, dashboard and business intelligence applications.
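
To make the schema discovery point concrete, here is a minimal sketch in Java; the service root URL is a placeholder, not a real endpoint. An OData client simply issues an HTTP GET for $metadata and receives an EDMX (XML) document describing the entity types and their properties, with no introspection query involved.

// Minimal sketch of OData schema discovery; the service root URL is a placeholder.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ODataMetadataExample {
    public static void main(String[] args) throws Exception {
        String serviceRoot = "https://example.com/odata"; // placeholder service root
        HttpRequest request = HttpRequest.newBuilder(URI.create(serviceRoot + "/$metadata"))
                .GET()
                .build();
        // The response is an EDMX (XML) document describing entity types, properties
        // and relationships, which tools can use to generate client code.
        String edmx = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
        System.out.println(edmx);
    }
}
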
Updated on Dec 30: You can find out more from an OData API implemented by me in the following blog post: 

What do you think? I'd love to hear from you. :)

Friday, November 20, 2020

The end is just a new beginning

It has been a long time since I posted anything new to my blog; oh yeah, it has been since I joined the Hewlett-Packard Company (HP) in April 2013. That's more than 7.5 years ago; time flies. As I was retrenched by DXC Technology at the end of last month due to a re-org and joined the unemployment statistics, I think it is a good time to review what I did in the past 7.5 years and think about where I am heading next.

In 2013, as I’m handful of developers in Malaysia posses skill in Grails Framework, HP hired me to lead the development of a project known as EPIC Configurator, a configuration and management console of an integrated invoice processing system. The development team composed of architect, developers and UI designer from Malaysia and United States. We are using the following technology stack for source code management, development and continuous integration and deployment (CI/CD):


In the EPIC Configurator team, I played the role of tech lead: capturing functional and technical requirements from the business owner and architect, assigning tasks to team members and supporting them to deliver the work. Developers in the team worked as full-stack developers; they developed features from front-end UIs to back-end APIs and the database.
There were two overseas trips for this project:
  • In January 2014, three of us from the Malaysia EPIC team headed to the HP office in Texas, US for one week of project meetings.
  • In November 2014, a US colleague and I headed to Bangalore, India to conduct two weeks of knowledge transfer sessions for the India team.

In 2015, the management announced HP would be split into two independent publicly listed companies known as HP Enterprise (HPE) and HP Inc. (HPQ). I became an HPE employee after the split.

Before the HP split, I joined the China team remotely to develop a project known as Divestiture and Acquisition (DnA). It is a dashboard portal to support the management teams in managing the HP split and the Micro Focus acquisition. We used the following technology stack for source code management, development and deployment:

In the DnA team, I played the role of senior Java developer, mostly working on back-end APIs using Jersey, Spring Framework and MongoDB.

At the end of 2016, HP Enterprise management spun off the Enterprise Services business and merged it with Computer Sciences Corporation (CSC), founding DXC Technology in April 2017. I became a DXC employee after the merger.

The birth of DXC Technology was the start of the Agile Process Automation (APA) project, a microservices architecture for integrated process discovery, process analytics and robotic process automation such as Blue Prism, UiPath and Python scripts. We used the following technology stack for source code management, development, and continuous integration and deployment (CI/CD):

As you can see from the diagram above, APA was a big undertaking: a huge project that adopted many emerging technologies. The engineering team had around 40 members from the US, China, Europe and Malaysia, composed of UI/UX designers, business analysts, architects, tech leads, developers, testers, scrum masters, project managers, infrastructure engineers, DevOps engineers, etc.

In the APA team, I played the role of tech lead, leading the team to develop the Analytic Console of APA, which empowered users to do data mapping and define their own tabular data and dashboards without the involvement of developers. I also worked on back-end APIs and OData APIs using Jersey, Spring Boot and MongoDB, which were deployed to Amazon Web Services and Microsoft Azure as Docker containers.

I also worked with the Europe team remotely on prototyping an intelligent voice agent in the conversational AI domain using the following technology stack:

From the writing above, you can see that I have mostly focused on back-end technologies in recent years. Going forward, I intend to close the loop by picking up a front-end technology, and I chose Flutter. You should see me write more about Flutter on this blog.

Lastly, I'm kind of interested in online marketing skills, especially those relevant to subscription businesses.

Stay safe, stay home and stay healthy! :)