Building an API with DataTrucker.IO
# DataTrucker.IO

DataTrucker.IO is a simple no-code / low-code API backend, completely free and licensed under Apache v2.

DataTrucker.IO reads simple JSON/YAML configs and builds the code necessary to turn them into an API. Along with building the code, it also hosts the code base on a Node.js server, i.e. it immediately makes the API available for consumption.
DataTrucker removes the most common activities a developer needs to perform on every new project. A few of these common activities are:
- Creating an API endpoint with a specified business logic (using simple plugins)
- Applying standard RBAC
- Applying authorization logic
- Applying hardening on endpoints
- Log management
- Connecting to a variety of systems
- Modularizing business logic
- Best of all, doing it with little to no code
# Let's get started

Today, in this article, we will go through installing DataTrucker on OpenShift and building a first API for a Postgres database. The process is similar in a Kubernetes environment.
# Step 1: Create a namespace called trucker

```shell
oc new-project trucker
```
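On a plain Kubernetes cluster, where `oc new-project` is unavailable, the equivalent is creating the namespace and (optionally) switching your context to it:

```shell
kubectl create namespace trucker
kubectl config set-context --current --namespace=trucker
```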
# Step 2: Download and install the application

DataTrucker.IO is available in the OperatorHub and can be added to your cluster as an Operator.
# Step 3: Navigate to the Operator

Click on Installed Operators and open the "DataTrucker.IO" Operator.
#
Step 4: Create a DataTrucker Config by running the the yaml objectCreate a pvc for a Database backend. Note: The A postgres DB provided using crunchydata containers is for getting started, for production workload we would recommend a hardened geo redundant DB
Create a pvc called samplepvc
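If samplepvc does not exist yet, a minimal manifest along these lines should do; the 1Gi size and reliance on the cluster's default storage class are assumptions to adjust for your environment:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: samplepvc
  namespace: trucker
spec:
  accessModes:
    - ReadWriteOnce   # single-writer access is enough for this demo DB
  resources:
    requests:
      storage: 1Gi    # assumed size; adjust for your workload
```

Apply it with `oc apply -f samplepvc.yaml`.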
Create an instance of the DatatruckerConfig object.

Before you click Create, ensure TempDB.enabled is true in the DatatruckerConfig object; this is required for prototyping the demo below.
A sample is available here: GitLab.
```shell
oc apply -f DatatruckerConfig.yaml
```
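To confirm the object was created (assuming the CRD registers the usual lowercase resource name for `oc get`):

```shell
oc get datatruckerconfig
```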
Let's understand what a `Kind: DatatruckerConfig` is. The config object creates the following:
# A Postgres DB backend

We provide a temporary, non-hardened DB from Crunchy Data, which can be created by enabling the following in the DataTrucker config. For production workloads, we recommend a hardened, geo-redundant database:

```yaml
TempDB:
  enabled: true
  pvc: samplepvc
```
# A DB configuration to use as the backend

In production systems, you would use a geo-redundant Postgres database:

```yaml
user: testuser
password: password
databasename: userdb
hostname: db
type: pg
port: 5432
```
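As an optional sanity check of these connection details (assuming the postgres client image can be pulled and the DB service is reachable from the namespace), you can run a throwaway client pod:

```shell
# Spin up a one-off psql client, list databases, and clean up the pod afterwards
oc run psql-check --rm -it --image=postgres -- \
  psql "postgresql://testuser:password@db:5432/userdb" -c '\l'
```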
# Crypto configuration to use as the backend

```yaml
API:
  cryptokeys: |-
    ....
```

Detailed information here.
# API server backend configuration

```yaml
API:
  name: API
  loginServer: |-
    ....
  managementServer: |-
    ....
  jobsServer: |-
    ....
```
# Step 5: Create login and management endpoints

# Login

This creates an endpoint for obtaining a login token:

```yaml
apiVersion: datatrucker.datatrucker.io/v1
kind: DatatruckerFlow
metadata:
  name: login-endpoint
spec:
  Type: Login
  DatatruckerConfig: < the name of the config object created in step 4 >
```
# Management endpoint

This creates an endpoint for RBAC management and credentials creation:

```yaml
apiVersion: datatrucker.datatrucker.io/v1
kind: DatatruckerFlow
metadata:
  name: management-endpoint
spec:
  Type: Management
  DatatruckerConfig: < the name of the config object created in step 4 >
```
Note: this creates the deployments and service endpoints for both the UI and the management API.
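Before exposing anything, you can check that the corresponding pods have come up (names and ages will vary by cluster):

```shell
oc get deployments,pods
```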
# Step 6: Expose the management endpoint

Expose the routes:

```shell
$ oc get svc | grep endpoint
login-endpoint           ClusterIP   10.217.5.89    <none>   80/TCP   3m43s
management-endpoint      ClusterIP   10.217.5.220   <none>   80/TCP   3m29s
management-endpoint-ui   ClusterIP   10.217.4.42    <none>   80/TCP   3m28s

$ oc expose svc management-endpoint-ui
route.route.openshift.io/management-endpoint-ui exposed

$ oc expose svc login-endpoint
route.route.openshift.io/login-endpoint exposed

$ oc get routes
NAME                     HOST/PORT                                         PATH   SERVICES                 PORT   TERMINATION   WILDCARD
login-endpoint           login-endpoint-trucker.apps-crc.testing                  login-endpoint           8080   None
management-endpoint-ui   management-endpoint-ui-trucker.apps-crc.testing          management-endpoint-ui   9080   None
```
# Step 7: Log in to the UI via a browser

Create an admin user, then log in.
# Step 8: Let's create a Postgres credential for the API

Up to now we have been installing; let's switch to building APIs.

Create a Postgres credential for the database of your choice:

- Expand the left navigation bar.
- Select Credentials.
- Open the Postgres Credentials pane.
- Click on Create Credentials.
- Enter your DB's details.
# Step 9: Let's create a Postgres API

Create a Flow object with the job spec below.

The spec below creates the following:

- A new microservice to host the API
- Two APIs on the microservice's route:
  - postgres1
    - puts the current date and a user-sent parameter into the SQL
    - is a POST request
    - applies input sanitization to the variable "userinput"
  - postgres2
    - gets the list of tables available
    - is a GET request
```yaml
---
apiVersion: datatrucker.datatrucker.io/v1
kind: DatatruckerFlow
metadata:
  name: my-first-api
spec:
  DatatruckerConfig: datatruckerconfig-sample
  JobDefinitions:
    - credentialname: db                                  # < cred name from step 8 >
      job_timeout: 600
      name: postgres1
      restmethod: POST
      script: 'select ''[[userinput]]'' as userinput; '   # < query you want to execute >
      tenant: Admin
      type: DB-Postgres
      validations:
        properties:
          userinput:
            maxLength: 18
            pattern: '^[a-z0-9]*$'
            type: string
        type: object
    - credentialname: db                                  # < cred name from step 8 >
      job_timeout: 600
      name: postgres2
      restmethod: GET
      script: select * from information_schema.tables     # < query you want to execute >
      tenant: Admin
      type: DB-Postgres
  Type: Job
```
Now search for the service and expose it:

```shell
$ oc get svc | grep my-first-api
my-first-api   ClusterIP   10.217.5.116   <none>   80/TCP   45s

$ oc expose svc my-first-api
route.route.openshift.io/my-first-api exposed

$ oc get routes | grep my-first-api
my-first-api   my-first-api-trucker.apps-crc.testing   my-first-api   8080   None
```
Now that you have a URL, let's go test it out.

The URL will be:

http://<your api route>/api/v1/jobs/<name of the JobDefinition defined in the YAML>

In the above example, two job definitions were created:

- postgres1, of type POST
- postgres2, of type GET
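Plugging the route created above into that pattern, the two endpoints for this example are:

```text
http://my-first-api-trucker.apps-crc.testing/api/v1/jobs/postgres1
http://my-first-api-trucker.apps-crc.testing/api/v1/jobs/postgres2
```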
# Step 10: Test out your APIs

Get a login token from the login endpoint:

```shell
curl --location --request POST 'http://login-endpoint-trucker.<wildcard.domain>/api/v1/login' \
  --header 'Content-Type: application/json' \
  --data-raw '{ "username": "xxx", "password": "xxxxxxxx", "tenant": "Admin" }'
```

Response:

```json
{
    "status": true,
    "username": "xxx",
    "token": "xxxxxxxxxxxx"
}
```
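For the next calls, it is handy to capture the token in a shell variable. A minimal sketch, assuming `jq` is installed and using the login route exposed in Step 6:

```shell
# Log in and extract .token from the JSON response
TOKEN=$(curl -s --location --request POST 'http://login-endpoint-trucker.<wildcard.domain>/api/v1/login' \
  --header 'Content-Type: application/json' \
  --data-raw '{ "username": "xxx", "password": "xxxxxxxx", "tenant": "Admin" }' \
  | jq -r '.token')
```

You can then pass `--header "Authorization: Bearer $TOKEN"` instead of pasting the token by hand.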
Now use the login token against your APIs.

# The first one

```shell
curl --location --request POST 'http://my-first-api-trucker.<wildcard.domain>/api/v1/jobs/postgres1' \
  --header 'Authorization: Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
  --header 'Content-Type: application/json' \
  --data-raw '{ "userinput": "myfirstresponse" }'
```

Response:

```json
{
    "reqCompleted": true,
    "date": "2021-09-05T22:05:58.064Z",
    "reqID": "req-3w",
    "data": {
        "command": "SELECT",
        "rowCount": 1,
        "oid": null,
        "rows": [
         .............
```
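The validations block from Step 9 is enforced on this endpoint: "userinput" must match the pattern '^[a-z0-9]*$' and be at most 18 characters. As a quick negative test (a sketch reusing the $TOKEN variable from above; the exact error payload is not shown here), an input with uppercase letters and punctuation should be rejected before the query runs:

```shell
# This request should fail validation: uppercase letters and '!' violate '^[a-z0-9]*$'
curl --location --request POST 'http://my-first-api-trucker.<wildcard.domain>/api/v1/jobs/postgres1' \
  --header "Authorization: Bearer $TOKEN" \
  --header 'Content-Type: application/json' \
  --data-raw '{ "userinput": "MyFirstResponse!" }'
```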
# The second one

```shell
curl --location --request GET 'http://my-first-api-trucker.<wildcard.domain>/api/v1/jobs/postgres2' \
  --header 'Authorization: Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
```

Response:

```json
{
    "reqCompleted": true,
    "date": "2021-09-05T22:03:58.389Z",
    "reqID": "req-35",
    "data": {
        "command": "SELECT",
        "rowCount": 185,
        "oid": null,
        "rows": [
            { "
       .......
```