The secret to completing anything

Let me start by apologizing for the slightly ‘clickbait’-style heading that brought you here. As a reward I will quickly sum it up into one word…

PLANNING

Now I am no expert on the subject, but just like everyone else I have my fair share of things I have seen through to completion and others I have not. I have managed to move my entire life across the globe, twice. I have successfully swum the Midmar Mile. I have seen various software development projects through to completion. I never thought I would make it to a 20th blog post, but here I am. However, I have yet to complete my own personal application, which I thought would take me a weekend and is now close to 4 years in the making, even though I technically have the skills to do it. So what gives?

In retrospect, the common denominator to the majority of my successful completions is some degree of planning.

There is this one type of person, the go-getter, who has the ability to power through any project with sheer willpower. This is great for some tasks like cleaning your house, organizing your documents, etc. My wife is a really great example of this and it makes me quite envious sometimes. I am lucky if I am able to wash all the dishes, but I often make it about 75% of the way before I lose my motivation and leave the rest of the dishes “soaking”.

Have a plan of sorts

Over the years, in my efforts to solve the issue of never completing anything I start working on, the first step was trying to plan my approach. This differs from project to project depending on its scope and complexity. A popular approach is some sort of ‘Kanban’ board to capture milestones and features of the project. This works for a one-person project as well as for multiple team members. There are multiple ways of arranging the board, from yellow post-it notes on a free wall space to more technology-based solutions such as Asana or Wrike.

The key goal here is to start by brainstorming the problem and isolating the major points that need to be tackled. In software development this could be figuring out a set of features that need to be designed, or understanding dependency trees.

Understand your problem in detail

Planning overlaps with the skill of setting goals, where you fully articulate how you will achieve your goal, almost as if you were writing about it in the past tense, having already achieved it. To borrow my dishes example, there is a difference between:

“I will wash all the dishes”

compared with:

“I will wash all the dishes. First I will rinse the dishes, starting with the least soiled ones, and arrange them into different stacks based on their usage type. Once rinsed, start by washing all the cutlery, rinsing it and putting it away. Move on to the cups and glasses, followed by plates and bowls, finishing up with pots and pans.”

The idea here is to break down the problem into smaller, easier-to-achieve goals. The science behind this is the feeling of accomplishment. This has a psychological impact that helps one stay motivated on the task at hand. This is especially important for larger projects that take multiple days, where no notable measure of progress leads to demotivation, which stops any project completion in its tracks.

The better you understand what is needed to solve the problem, the easier it will be to plan how, and in what order, things need to be done to successfully complete the project. That is exactly why my current web app project is taking so long: I have not planned it at all.

Fix your environment

Never underestimate how your environment can positively or negatively impact your productivity. It is very easy to justify stepping away from something when it becomes challenging, to randomly clean your desk or finally sort out all the documents you have been putting off for so long. See step one, and create a plan where those tasks are allocated their own time.

Try to keep your work area free of clutter, which is guaranteed to add distraction. Create a space that is calming for you; some people like a completely quiet space, while others prefer some music (relaxing or even hardcore rock) to help them zone out. Some incense or aromatherapy can help.

It is not always possible to create the perfect environment. At the time of writing this, my one-year-old cat (Misty) is craving my attention and running around like it ate two Energizer bunnies. While it is distracting, I have taken certain steps to mitigate it as best I can. In a work environment you will face the same challenge, being exposed to constant interruptions. You might not be able to blast your music, but you can discuss with your boss some concessions on how to improve the work environment that both parties will be happy with.

Conclusion

Nothing I have offered is groundbreaking and you might feel like I have shared what you already know. I still have not planned my web app project, choosing instead to write this post in an effort to subtract one of my distractions while motivating me to finally plan the thing. I am already 10 days into my annual vacation and have barely made any progress, unless you call sleeping in for half the day “progress”. So I will sum up my mindset by paraphrasing a quote attributed to Thomas Edison.

“I didn’t fail 1,000 times. I just found 1,000 ways that did not work”

The art of abstraction

Imagine for a moment that you have your own business, and as the only employee you are responsible for everything. This involves sourcing raw materials or inputs for the business, processing those inputs to generate some kind of value, marketing, etc… I think you get the point. Imagine the business takes off, resulting in more customers. For the business to continue, it needs to scale to the demand. At some point it will become impossible to manage solely, and you will be forced to delegate responsibility.

This is where the “Art” comes into play, because it really is something of an interpretation. There exist countless books covering many different management styles, all hoping to maximize efficiency: techniques spanning the full spectrum from basically nothing to obsessive micromanagement.

The software development space is no different, with no single correct answer either. Over time, as the team expands, the development process will ultimately evolve and be tweaked.

Part of that evolution is having to maintain existing solutions. You can keep patching a solution up, but let’s say it was based on a stack from 1960… eventually you will reach a ceiling where that becomes impossible: lack of hardware support, lack of software support, and a lack of people skilled in that area. The other option is to rebuild using newer technologies and skills, with some degree of future proofing, but even that will eventually become outdated.

Thus one needs to abstract responsibility in the design. Just like in the business example, your sourcing and invoicing can be completely isolated; neither needs to understand how the other functions. The sourcing department simply informs the invoicing department of a new invoice from a supplier, requesting it be paid. They don’t need to know how it gets paid, only that it is paid so they can keep going back to the supplier.

In software development this is referred to as a “black box”. It is a term used in code testing and internal solution design, but it extends to platform and architecture design as well. When designing a large-scale platform, if you had to worry about the lower-level code design, you would never be able to finish planning. Even if you did finish, it would take so long that the landscape would have changed, forcing you to go back and plan everything again.

Personally my goal is to isolate as much functionality as possible, breaking the code design up into the smallest blocks of logic possible. This sounds counterproductive, but it adds the flexibility to swap out pieces of the solution bit by bit. Say you want to take a monolithic application and convert it into something more microservice-based. Start by switching over one or two small modules into a microservice, with a service layer to interact with the new microservice. At some point your monolith is reduced to a collection of service layers that interact with different microservices. At that point you can design a new application that targets a new technology without having to redesign a substantial portion of the code base.
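
To make that concrete, here is a minimal TypeScript sketch of the idea (the names are hypothetical, borrowed from the invoicing example, not from any real project):

interface InvoicePayer {
  payInvoice(invoiceId: string): Promise<boolean>;
}

// the original in-process module inside the monolith
class LocalInvoicePayer implements InvoicePayer {
  async payInvoice(invoiceId: string): Promise<boolean> {
    // ... existing monolith logic ...
    return true;
  }
}

// a drop-in replacement that delegates to the new microservice
// (assumes a fetch implementation is available, e.g. Node 18+)
class RemoteInvoicePayer implements InvoicePayer {
  constructor(private baseUrl: string) {}
  async payInvoice(invoiceId: string): Promise<boolean> {
    const res = await fetch(`${this.baseUrl}/invoices/${invoiceId}/pay`, { method: 'POST' });
    return res.ok;
  }
}

The rest of the code base only ever sees InvoicePayer, so swapping the local class for the remote one touches a single binding rather than every caller.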

Obviously that sounds a lot easier than it is in practice, because you might have to redesign whole layers due to scaling issues, and the problem space will influence choices that can impose restrictions on the solution. Hopefully, through some trial and error, you will make it through the gauntlet ready to face the next battle of evolving code.

My Web App Journey – OpenFaaS

I feel like this is more of a step backwards. I mean, in the last post I was starting to build some interfaces and about to start on the persistence layer.

I purchased the Khadas VIM3L in the hopes of building an HTPC for my wife, but sadly it did not work with the streaming media sites due to DRM issues. The device is basically a nice powerful Raspberry Pi with an added NPU, which could be used for some machine learning down the line. So I decided to repurpose the device to try my hand at OpenFaaS, which basically gives you AWS Lambda-style functions on your own hardware.

So this post will be about the rocky road I took to get it running, which took me a little longer than 15 minutes.

Who reads guides anyway!

So I have been wanting to try FaaS (Functions as a Service), which I understand as the next evolution of microservices. I found some guides and watched some YouTube videos showing how easy it was to deploy. I thought, how hard could it be?

So I started by flashing the firmware to install Debian Minimal, and halfway into the process I hit a brick wall due to differences in how NAT translation works in the newer Linux kernel 5.9 compared to, say, Ubuntu 20.04, which is still on kernel 4.19. The problem comes from the transition from the classic iptables to NFTables.

In the hopes of a quick solution I tweeted @alexellisuk, who suggested I use Ubuntu 20 instead. Alex (the developer of OpenFaaS) recently posted a newer guide suggesting it takes only 15 minutes to get it running using a lightweight Kubernetes (k3s).

Installing Kubernetes

Now because I was not using a Raspberry Pi, I knew some steps from the guide Alex posted would not apply. Alex has created some great tools that streamline the process, so long as you run them on the right system.

Because I did not follow his guide word for word, I missed the part that suggests you don't drive the installation from the target but from your laptop/desktop instead (which I later gleaned from one of his older videos on YouTube).

I installed WSL2 running the Ubuntu 20 image on my Windows desktop to make it a little easier after my previous failures. You see, I tried to PuTTY into the target server and install the tools onto it like that. This did not bode well, because I did not know what I was doing and was getting stuck on simple steps like trying to ‘ssh-copy-id’.

So the first step is to establish a pre-authenticated session between my workstation and the target server. You need to create an SSH key, and this is done on your workstation (not the server):

ssh-keygen

Don't specify a passphrase, which keeps future sign-ins simple. You can now copy your public SSH key from your workstation to the server.

ssh-copy-id -i .ssh/id_rsa.pub root@192.168.1.136

The above command takes the public SSH key from your local workstation and connects to the server over SSH (192.168.1.136 in my case) using the root account. You will provide the password; I imagine you could do this with any other account, but I am not sure of the limitations (like needing sudo?). Anyhow, once that is done you should be able to SSH from your workstation to the server without having to input the password.

ssh root@192.168.1.136

It works by adding your public key to the ‘~/.ssh/authorized_keys’ file of the root account (or whichever account you chose).

Now that our workstation is primed to easily connect to the server, we need the tooling that helps remotely deploy k3s to the server.

curl -sSL https://dl.get-arkade.dev | sudo sh

Alex created a lovely tool called ‘arkade’, which helps both with fetching tools and with deploying a selection of apps to the k3s cluster.

arkade get k3sup

This will download ‘k3sup’ (pronounced “ketchup”) which is a tool to help you remotely install k3s.

export IP=192.168.1.136

This exports (sets an environment variable for) the target IP address where you will install ‘k3s’.

k3sup install --ip $IP --user root

This will then install ‘k3s’ on the target server; you specify the user that k3sup logs on with over SSH. I used ‘root’, but you could also use ‘pi’ if you were installing on a Raspberry Pi.

During the installation process it copies a ‘kubeconfig’ file over to your workstation, into whichever folder you ran the install from. I would suggest you do this from your home directory.

arkade get kubectl

This downloads the tool used to interact with ‘k3s’, to get status or control the cluster as needed. You will need to make sure your environment is configured to point to your kubeconfig file.

export KUBECONFIG=/home/jt/kubeconfig

In my case it originally saved to my ‘~/.ssh’ folder, but you can move it; the above is the environment variable needed to make ‘kubectl’ connect.

kubectl get node -o wide
kubectl top node
kubectl top pod --all-namespaces

Some commands to get you started with discovering your freshly installed ‘k3s’ cluster and checking if the nodes are up and running.

Your first pod

So the first pod, or application, you can try installing before OpenFaaS, as in the guide, is the ‘kubernetes-dashboard’.

arkade install kubernetes-dashboard

From your workstation this command will deploy the dashboard to your server. Once it completes, you need to create an admin user before you can access the dashboard.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

You paste the above directly into your bash window and run it, which will create the admin user and its role binding.

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user-token | awk '{print $1}')

This command will print the token created for this admin account that you will need when you try to log into the dashboard.

kubectl proxy

The dashboard is hosted on the server; to access it locally on your workstation, you can proxy it with the above command.

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

This is the link to use to access the dashboard; it will ask you to input the token. Once you paste the token, you can browse the dashboard and get information about the cluster.

Broken Config

One problem I faced was a bad kubeconfig, possibly left over from one of my previous attempts to install ‘k3s’ both directly on the server and remotely (a couple of times).

I tried draining the old nodes, deleting them, and uninstalling k3s using the ‘k3s-uninstall’ script on the server. Eventually I found a nice nugget:

k3sup install --ip $IP --user root --local-path $HOME/.kube/config --merge --skip-install

Running this on your workstation recopies the current kubeconfig from your server to the ‘local-path’ on your workstation. You can also see I changed the location and name of the file.

export KUBECONFIG=/home/jt/.kube/config

I had to update my environment to reference the new config file, because I didn't like it sitting in my main home directory; this was something I saw Alex do in one of his other videos.

Reached the goal

Finally, some luck: you should now be able to deploy OpenFaaS to your k3s cluster.

arkade install openfaas

Hopefully this installs without issue and deploys to your cluster. I kept getting an error about basic-auth not being configured correctly or already being set up. I solved it by downloading the kubeconfig again, though it could also have been from cleaning the server of all the old ‘rancher’ configs I could find.

arkade get faas-cli

Running this on your workstation installs the client needed to communicate with your OpenFaaS installation. You may have noticed some output after installing OpenFaaS that guides you through the next steps.

export PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
export OPENFAAS_URL=http://$IP:31112

This sets up the workstation environment with the password and endpoint of the OpenFaaS service. To use ‘faas-cli’, you need to log in first.

echo -n $PASSWORD | faas-cli login --username admin --password-stdin

This little gem will log in and allow you to interact with OpenFaaS using the CLI.

faas-cli store list --platform armhf
faas-cli store deploy figlet --platform armhf
faas-cli list

The commands above, run on the workstation, show how to list the functions available for deployment in the store, deploy a simple function (‘figlet’) to your OpenFaaS instance, and finally list the functions currently deployed to it.
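
If the deployment worked, you should also be able to invoke the function right from the workstation, piping some text to it over stdin:

echo OpenFaaS | faas-cli invoke figlet

figlet just renders the input as ASCII art, which makes it a nice smoke test.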

There is also a web interface you can use to interact with your deployed functions.

echo $OPENFAAS_URL
echo $PASSWORD

This will display the details you need to access the web dashboard from your workstation; the username is ‘admin’.

The only thing left is to make those environment variables permanent, so that next time you open bash they are set for you.
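
For example, you could append the exports to your ‘.bashrc’ (a quick sketch; substitute your own paths and IP address):

echo 'export KUBECONFIG=/home/jt/.kube/config' >> ~/.bashrc
echo 'export OPENFAAS_URL=http://192.168.1.136:31112' >> ~/.bashrc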

Reference material

https://alexellisuk.medium.com/walk-through-install-kubernetes-to-your-raspberry-pi-in-15-minutes-84a8492dc95a

My Web App Journey – Data Source (Take 2)

I did the equivalent of an oil change in the previous post, upgrading my project to the latest versions of the different packages to minimize the technical debt from having halted development for about a year.

I was in the middle of changing the data source, which has left the project broken, because both the original repository, which leveraged an in-memory array, and the new repository, which works with the MongoDB server, no longer match the changes I made to the interface.

Typically you would create a new version of your interface to avoid breaking changes; the result here is that I now need to update not just both repositories but also the controller to leverage the new design. That is the cost of shifting all the control from the repository directly into the controller.

Reviewing the interface design: it is now less generic, which makes the coupling between the controller and the data nice and loose, but it suffers from being single use.

Building the chain

So by “single use”, I am referring to the fact that a repository function returning a list of items is limited to a single step. A better way to think of this: say you want to get from the database a list of items created on a specific date which are categorized as food expenses.

Currently my interface is such that you would choose a primary filter for the items but then have to perform any additional filtering inside the controller. This feels like extra work to me. What would be nice is something more like the LINQ-style functions found in C#. I don't know if it is possible, and I might end up simulating the behavior described above inside the repository itself.

I started by trying to explore the MongoDB client to understand its capabilities, but found I lacked the type definitions. So first I need to install those:

npm install @types/mongodb

Going back into VS Code, I leveraged the type hints to explore the Mongo client's functions, their parameters and return types. Thinking about how LINQ is achieved inside the C# language might help: instead of the “Find” methods returning the list of objects, they would need to return some sort of type that feeds into other “Find” methods, with a final method to process and “collect” the items from the database.

The challenge is to design it in a way that is repository agnostic, to avoid bias towards a particular database. The solution I plan to settle on is a new interface that specifies a bunch of different filter methods. Each filter method returns that same interface, allowing methods to be chained, with a final “collect” method as the bookend that returns the collection of items.

/src/models/transaction.ts

export class Transaction {
  id: string;
  amount: number;
  currency: string;
  date: Date;
  description: string;
  source: string;
  type: string;
  category?: string;
}

export interface TransactionRepositoryFilter {

  /** find transactions in a category. supports regular expression matching.
  * @returns instance of a TransactionRepositoryFilter
  */
  filterByCategory(category: string): TransactionRepositoryFilter;

  /** find transactions based on the description. supports regular expression matching.
  * @returns instance of a TransactionRepositoryFilter
  */
  filterByDescription(description: string): TransactionRepositoryFilter;

  /** find transactions based on the source. supports regular expression matching.
   * @returns instance of a TransactionRepositoryFilter
  */
  filterBySource(source: string): TransactionRepositoryFilter;

  /** find transactions based on the type. supports regular expression matching.
  * @returns instance of a TransactionRepositoryFilter
  */
  filterByType(type: string): TransactionRepositoryFilter;

  /** find transactions between two given amounts (inclusive).
  * @returns instance of a TransactionRepositoryFilter
  */
  filterByAmountBetween(lower: number, upper: number): TransactionRepositoryFilter;

  /** find transactions above a given amount (exclusive).
  * @returns instance of a TransactionRepositoryFilter
  */
  filterByAmountAbove(amount: number): TransactionRepositoryFilter;

  /** find transactions below a given amount (exclusive).
  * @returns instance of a TransactionRepositoryFilter
  */
  filterByAmountBelow(amount: number): TransactionRepositoryFilter;

  /** find transactions at a given amount (inclusive).
  * @returns instance of a TransactionRepositoryFilter
  */
  filterByAmount(amount: number): TransactionRepositoryFilter;

  /** find transactions between two given dates (inclusive).
  * @returns instance of a TransactionRepositoryFilter
  */
  filterByDateBetween(lower: Date, upper: Date): TransactionRepositoryFilter;

  /** find transactions after a given date (exclusive).
  * @returns instance of a TransactionRepositoryFilter
  */
  filterByDateAfter(date: Date): TransactionRepositoryFilter;

  /** find transactions before a given date (exclusive).
  * @returns instance of a TransactionRepositoryFilter
  */
  filterByDateBefore(date: Date): TransactionRepositoryFilter;

  /** find transactions on a given date.
  * @returns instance of a TransactionRepositoryFilter
  */
  filterByDate(date: Date): TransactionRepositoryFilter;

  /** collect all Transactions based on the previous filters
   * @returns array of transactions. null if no records found
   */
  collect(): Array<Transaction>;

}

export interface TransactionRepository extends TransactionRepositoryFilter {

  /** add a new transaction to the repository.
  * @returns transaction. null if it failed to add
  */
  add(record: Transaction): Transaction;

  /** updates a transaction in the repository.
  * @returns transaction. null if unsuccessful 
  */
  update(record: Transaction): Transaction;

  /** get all the transactions in the repository.
  * @returns array of transaction. null if no records found
  */
  all(): Array<Transaction>;

  /** finds the first matching Transaction in the repository based on its ID. supports regular expression matching.
  * @returns transaction. null if not found
  */
  getById(id: string): Transaction;

  /** removes a record from the repository.
  * @returns transaction. null if unsuccessful */
  removeById(id: string): Transaction;

}

I think that should do it; how exactly I make it actually happen will be down to the implementation of each repository.
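
To illustrate the intent, usage should eventually read something like this (a hypothetical example against the interface above, not working code yet):

const foodOnDate: Array<Transaction> = repository
  .filterByCategory('food')
  .filterByDate(new Date('2020-08-01'))
  .collect();

Each filter narrows the selection and hands back the same interface, and only collect() materializes the results.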

Housekeeping

With the new tweaks to the interface, I need to update the controller to take advantage of them. This means removing the expression logic from the controller so it focuses on translating the incoming request, calling the relevant repository method, and generating a response.

/src/controllers/transaction.ts

import { Request, Response, Router } from 'express';
import { Transaction, TransactionRepository } from '../models/transaction';
import { REPOSITORY_TYPES } from '../types';
import { inject, injectable } from "inversify";

@injectable()
export class TransactionController {

  private repository: TransactionRepository;

  private router: Router = Router();

  public constructor(@inject(REPOSITORY_TYPES.Transaction) repository: TransactionRepository) {

    this.repository = repository;

    this.router.get('/:id', (req: Request, res: Response) => {

      let id = req.params.id;
      let transaction = this.repository.getById(id);

      if (transaction == null)
        res.status(404).send();  // Record not found
      else
        res.status(200).send(transaction);
    });

    this.router.post('/', (req: Request, res: Response) => {

      let transaction: Transaction = {
        id: "0",
        type: req.body.type,
        date: new Date(req.body.date),
        currency: req.body.currency,
        amount: req.body.amount,
        source: req.body.source,
        description: req.body.description
      };

      transaction = this.repository.add(transaction);

      if (transaction == null)
        res.status(404).send();  // Record not added
      else
        res.status(200).send(transaction);

    });

    this.router.put('/:id', (req: Request, res: Response) => {

      let transaction = this.repository.update({
        id: req.params.id,
        amount: req.body.amount,
        currency: req.body.currency,
        date: req.body.date,
        description: req.body.description,
        source: req.body.source,
        type: req.body.type,
        category: req.body.category
      });

      if (transaction == null)
        res.status(404).send();  // Record not found
      else
        res.status(200).send(transaction);

    });

    this.router.delete('/:id', (req: Request, res: Response) => {

      let id = req.params.id;
      let transaction = this.repository.removeById(id);

      if ( transaction == null)
        res.status(404).send();  // Record not found
      else
        res.status(200).send('Transaction deleted');

    });
  }

  public getRouter(): Router {
    return this.router;
  }

}

I updated the array-based repository and fired up the project to make sure it was still functional after all the changes I had made. I found a bug with the update method and quickly touched that up.

/src/repositories/transactionArray.ts

import { injectable } from 'inversify';

import { Transaction, TransactionRepository, TransactionRepositoryFilter } from '../models/transaction';

@injectable()
export class TransactionArrayRepository implements TransactionRepository {

  private next_id: number = 5;

  private transactions: Array<Transaction> = [
    { id: "1", type: 'DEBIT', date: new Date('2018-12-28'), currency: 'USD', amount: -10.00, source: 'DEBIT_CARD', description: 'Soup' },
    { id: "2", type: 'DEBIT', date: new Date('2018-12-28'), currency: 'USD', amount: -15.00, source: 'DEBIT_CARD', description: 'Dessert' },
    { id: "3", type: 'DEBIT', date: new Date('2018-12-28'), currency: 'USD', amount: -20.00, source: 'DEBIT_CARD', description: 'Drinks' },
    { id: "4", type: 'DEBIT', date: new Date('2018-12-28'), currency: 'USD', amount: -5.00, source: 'DEBIT_CARD', description: 'Tip' }
  ];

  public getById(id: string): Transaction {
    return this.transactions.find(x => x.id == id);
  }

  public all(): Array<Transaction> {
    return Object.assign([], this.transactions);
  }

  public add(record: Transaction): Transaction {
    let entry = Object.assign({}, record);
    entry.id = (++this.next_id).toString();
    this.transactions.push(entry);
    return entry;
  }

  public update(record: Transaction): Transaction {
    let entry = this.getById(record.id);
    if (entry == null) return null;
    Object.keys(record).forEach(prop => {
      if (record[prop]) {
        entry[prop] = record[prop];
      }
    });
    return entry;
  }

  public removeById(id: string): Transaction {
    let entry = this.getById(id);
    if (entry == null) return null;
    this.transactions = this.transactions.filter(x => x.id != id);
    return entry;
  }

  filterByCategory(category: string): TransactionRepositoryFilter {
    throw new Error("Method not implemented.");
  }
  filterByDescription(description: string): TransactionRepositoryFilter {
    throw new Error("Method not implemented.");
  }
  filterBySource(source: string): TransactionRepositoryFilter {
    throw new Error("Method not implemented.");
  }
  filterByType(type: string): TransactionRepositoryFilter {
    throw new Error("Method not implemented.");
  }
  filterByAmountBetween(lower: number, upper: number): TransactionRepositoryFilter {
    throw new Error("Method not implemented.");
  }
  filterByAmountAbove(amount: number): TransactionRepositoryFilter {
    throw new Error("Method not implemented.");
  }
  filterByAmountBelow(amount: number): TransactionRepositoryFilter {
    throw new Error("Method not implemented.");
  }
  filterByAmount(amount: number): TransactionRepositoryFilter {
    throw new Error("Method not implemented.");
  }
  filterByDateBetween(lower: Date, upper: Date): TransactionRepositoryFilter {
    throw new Error("Method not implemented.");
  }
  filterByDateAfter(date: Date): TransactionRepositoryFilter {
    throw new Error("Method not implemented.");
  }
  filterByDateBefore(date: Date): TransactionRepositoryFilter {
    throw new Error("Method not implemented.");
  }
  filterByDate(date: Date): TransactionRepositoryFilter {
    throw new Error("Method not implemented.");
  }
  collect(): Array<Transaction> {
    throw new Error("Method not implemented.");
  }

}

Going Mongo

Time to build the repository for interfacing with the Mongo database. Start by creating a new file in the repositories directory to export a class definition that is also injectable.

/src/repositories/transactionMongo.ts

import { inject, injectable } from 'inversify';
import { MongoClient, Collection, ObjectId } from 'mongodb';
import { Transaction, TransactionRepository } from '../models/transaction';
import { CONNECTIONS } from '../types';

@injectable()
export class TransactionMongoRepository implements TransactionRepository {

  private client: MongoClient;

  public constructor(@inject(CONNECTIONS.MongoDB) client: MongoClient) {
    // keep a single shared client instance for all queries
    this.client = client;
  }
}

VS Code should complain because the interface needs to be implemented. I went ahead and let it add the stubs for me, and will implement each call one at a time.

Thoughts – Separation of Concerns

My current workday efforts revolve around a Java-based solution. I have been refactoring a design that was edging towards something monolithic, breaking it up into smaller, easier-to-maintain packages.

One of the issues was that most of the logic was boxed into a single class. Creating the object relied on being provided another class object, with direct access to its private fields. Normally this is not a problem, but I was looking to decompose it into multiple class objects, and when there is a lot of business logic involved, that becomes time consuming. The goal is to migrate to a POJO (Plain Old Java Object) with no business logic about how its fields get populated, and to push the business logic into a Factory class which knows how to digest any input data in order to create the desired output.

I was looking into the pros and cons of using the classic Java getter/setter pattern. One good argument is that, semantically speaking, a getter/setter pair does pretty much the same thing as declaring a public field that can be accessed directly. Consider the two examples below. They are basically the same, with Example A being a little easier to code and use. It will still work with basic serialization/deserialization methods… so why go to the effort of adding more layers of code on top?

Example A

public class Foo {
  public String bar;
}

Example B

public class Foo {
  private String bar;
  public String getBar() { return this.bar; }
  public void setBar(String value) { this.bar = value; }
}

One benefit of Example B is the ability to limit access to the field, like making it read-only. If you keep business logic in the class, use the getters and setters to reference the internal fields. The benefit shows when you need to change some logic around a field, such as making sure a string value matches a desired pattern: with Example A, you need to update the logic in every location to keep it universally applied, which becomes challenging the bigger the design gets. With Example B, you simply update the logic in the getter or setter, giving a single point to manage it.
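
Example C (a sketch of that idea; the pattern check is just an illustration)

public class Foo {
  private String bar;
  public String getBar() { return this.bar; }
  public void setBar(String value) {
    // the single place the rule is enforced; every caller gets it for free
    if (value == null || !value.matches("[a-z]+")) {
      throw new IllegalArgumentException("bar must be lowercase letters");
    }
    this.bar = value;
  }
}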

In terms of serialization/deserialization, such as with Jackson (JSON), you have multiple options at hand. The easiest is to let Jackson inspect the class and leverage any public fields or getters and setters. Getters and setters add the additional ability to create unidirectional (de)serialization if needed. You can also annotate the getters and setters independently, or specify custom serialization on a per-field basis. The next option is to write a custom serializer, but that is more involved, and worse, updating the class to add new fields means updating your custom serializers too.
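
For instance (a sketch using Jackson's standard annotations on the toy class from above): annotating only the getter keeps the property in the JSON output, while ignoring the setter stops it from being populated when reading JSON back in.

import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonProperty;

public class Foo {
  private String bar;

  @JsonProperty("bar")  // included when writing JSON
  public String getBar() { return this.bar; }

  @JsonIgnore           // skipped when reading JSON
  public void setBar(String value) { this.bar = value; }
}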

Conclusion

Separation of concerns by spreading the code design around does mean more code, and that can take additional time. This might sound like something you wouldn't want to do if you're under pressure. The benefit is a solution more flexible to the kinds of changes that would otherwise require redesigning half of it. Separating the different concerns of your solution, by ensuring there are not a lot of tightly coupled components, might take a little more time initially, but the long-term time savings are worth it. The other benefit is that each component is easier to test, which is crucial to ensure you don't break something by mistake when you make changes.

My Web App Journey – Taming the fire

In my previous post I was planning to get back into my project, but then the whole world went up in flames. Work remained constant, but spare time was soon occupied with games, and my ambitions receded once more.

But like a phoenix I will once more rise from the ashes… reviving the ol' Raspberry Pi and marching on.

Baking the Pi

Originally I was planning on using Node.js to stand up the server, and while getting that going was fun, at heart I am a C# developer. With the .NET Core 3.1 release, the appeal of running .NET Core on my Pi was too good to pass up.

I stumbled across a talk/demonstration by Pete Gallagher showing how to run .NET Core 3.1 on a Pi. This was the spark that rekindled my fire; I followed the guide from his post, which he made even simpler with a handy script.

A stretch of the imagination

With the script completed, I checked the dotnet tool version:

dotnet --info

Everything looked promising: it showed that .NET Core SDK 3.1.302 was installed, along with a bunch of other stuff, so in theory it should work. I created a new directory aptly named ‘helloworld’ and made a new project:

dotnet new console

It worked! I now had a very basic project and all that was left was to fire it up.

dotnet run

Alas, it was too good to be true. It did not work. I tried to build then run… no joy. It just said I needed to install the binary for the ARM processor. So I thought maybe it was because I was running Raspbian Stretch (OS version 9) instead of Buster (OS version 10).

So I thought, let's quickly upgrade… how hard can it be? A quick Google, a nice easy-to-follow guide, and I was on my way to victory…

Many hours later… OK, it took a LOT longer than I was expecting, but I finally upgraded to version 10 and tried again, only to be sadly disappointed that it still refused to run. I shut down my desktop and went to sleep because it was late into the night.

The next day, after a good long night's sleep, I picked up where I left off. Could it be the script missed something? I recalled Pete mentioning in the talk the lengthy process needed to install the SDK and get it running, including a piece about setting the environment variables. The blog post was updated after that talk with the nice script, so I scanned the post, found the details about updating your environment, and voila: the script does not include this step.
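
For reference, the missing environment setup looks something like this (paths assumed from Pete's guide; adjust them to wherever the SDK was unpacked):

export DOTNET_ROOT=$HOME/dotnet
export PATH=$PATH:$HOME/dotnet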

So I quickly updated the “.bashrc” file to export DOTNET_ROOT as above and tried to run my very basic project. A small tear and much happiness when I was finally greeted with:


Hello World!


Taming the Fire

WebAssembly has gained popularity, offering amazing benefits for creating web-based applications that are almost on par with native desktop applications. Microsoft has not ignored this, working on its own take: Blazor.

Originally I was going to explore building my web app with Angular and TypeScript. Blazor is based on C# code working with Razor HTML templating, and behaves similarly to Angular. When the code is published, it is packaged into a library exposed through a JavaScript shim that acts as a bootloader of sorts.

This offers a similar experience to a C# MVC website, but instead of running on the server, the application runs client side, using API calls to the backend server for your persistence and database needs. Data could also be temporarily cached on the client, as in the case of a progressive web application.

Summary

So this time around, this was more of a typical blog post and less of a technical one. The project originally started with the goal of developing on a lightweight device (a Windows Surface) and eventually hosting on a Raspberry Pi. It was somewhat of a parallel to what I was doing at work.

I am now reverting to my current favourite platform (C#) to explore how it compares to other technologies.

A future project I would like to explore is building my own Pi cluster to try things like Kubernetes and OpenFaaS. I like the idea of writing very lean microservices and being able to deploy a slim function mapped to an endpoint instead of a whole API. This offers great flexibility, where you don't need to redeploy your entire API backend when changing or adding endpoints. FUN.

My Web App Journey – Doing an oil change

As mentioned in my last post, I took a bit of a break to focus on some studies and then relocated halfway around the world. In that time, some of the core technologies I use for the project have evolved, as is normally the case. Even the way posts are authored in WordPress has changed a little.

I decided this would be great motivation to upgrade my development environment and work through all the issues that brings with it for the project.

Rocky start

I would normally prefer to start with my code in a fully working state, but I suspect I will have many new issues to resolve after the upgrade, possibly resulting in further changes to the underlying design. So while it is tempting to first fix the code, I will instead download and install the latest environments.

I started by installing 64-bit Node.js and opted to also install Chocolatey, which is basically a software package manager that can help with installing additional software. If you want to be a purist, you can skip that and install the core pieces manually.

After a reboot, I opened a command prompt and navigated to the root folder of my project to upgrade some of the packages and dependencies (like TypeScript).

Upgrading TypeScript to the latest version is straightforward with the following npm command:

npm install -g typescript@latest

This updates the global package and not the project, so I have to update the project's package as well. I don't know if I could have skipped updating the global package, but it did not hurt. In fact, it turns out you can quickly update all the packages in your solution:

npm update
npm install

That sorts out the project. I have not yet started using Angular, so luckily I have one less thing to update. That leaves updating MongoDB to the current version, which is made a little easier with Chocolatey. Open a command prompt with administrative privileges:

choco install mongodb

This automates installing the latest version and updating the Windows service to make sure it is running. If you would prefer to control the database server manually, you can change the service from automatic to manual. You will also need to update your local PATH settings if you have an older version installed, to make sure the commands resolve to the latest version.

I have now run into the first issue. Upgrading from MongoDB 3.4 straight to 4.2 left me with a database that is not supported by the new version. If I want to keep my existing data, I would need to follow the upgrade path 3.4 => 3.6 => 4.0 => 4.2.

That sounds like a little too much hard work, and as I don't yet have any data I am willing to wipe and reset. I simply deleted the old db directory, created a new one, and fired up the Mongo server in standalone mode; it magically created a fresh database.
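
Something along these lines (a sketch; point it at whatever data directory you created):

mongod --dbpath C:\data\db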

While I might not yet have an Angular web app, I might as well update the CLI at the same time. This is easily done through npm:

npm install -g @angular/cli

I am now ready to fix the problems the past me left for the future me to solve.

Contributing Research Material

Eaviden. How to update typescript to latest version with npm. Stack Overflow. 24 September 2016.

mkevenaar. Chocolatey packages page for MongoDB. Website. 29 February 2020.

My Web App Journey – Data Store

In the previous chapter my initial plan was to improve the data repository to leverage MongoDB. I ended up deviating from that plan to focus on some SOLID principles.

This chapter will be about introducing MongoDB connectivity to manage the data persistence of the API.

First Steps

I start by opening my project and going to the terminal window. Making sure that I am at the root folder of my project, I install the MongoDB dependency:

npm install mongodb

This downloads the required dependency files, while updating the project configuration file (package.json) as needed.

A new world

I want to add a new repository that uses a connection to the MongoDB server. The recommended design is to create a single instance of the client and reuse it, instead of creating a new connection for every call. Sticking to SOLID principles, I am going to inject this client into the repository.

To start I need to define a type that will hold the symbols used for the injection pattern. This will update the types as follows:

/src/types.ts
export const REPOSITORY_TYPES = {
  Transaction: Symbol.for('Transactions')
};

export const CONTROLLERS = {
  Transaction: Symbol.for('Transaction')
};

export const SINGLETONS = {
  Routing: Symbol.for('Routing')
};

export const CONNECTIONS = {
  MongoDB: Symbol.for('MongoDB')
};

The wrong path

Sometimes we don’t realize some choices are not the right ones. Just as I was about to get my new repository in place, I uncovered a flaw in my interface design.

It is too generic.

I was trying to move the ability to filter matches out of the repository by relying on predicates. This conflicts with the MongoDB driver, which uses a query language.

I need to update my interface to introduce the various methods by which I want to interact with the data. I decided to add some freedom by using regular expressions to allow multiple matches if needed.
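
To make the conflict concrete (a hypothetical comparison, not code from the project): a predicate is an opaque function the driver cannot translate, whereas Mongo wants a query document it can execute server side.

// predicate style: only an in-memory array can evaluate this
repository.find(x => x.amount < 0);

// MongoDB query document: what the driver actually understands
collection.find({ amount: { $lt: 0 } });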

/src/models/transaction.ts
export class Transaction {
  id: number;
  amount: number;
  currency: string;
  date: Date;
  description: string;
  source: string;
  type: string;
  category?: string;
}

export interface TransactionRepository {

  /** add a new transaction to the repository.
   * @returns transaction. null if it failed to add
   */
  add(record: Transaction): Transaction;

  /** updates a transaction in the repository.
   * @returns transaction. null if unsuccessful */
  update(record: Transaction): Transaction;

  /** removes a record from the repository.
   * @returns transaction. null if unsuccessful */
  remove(record: Transaction): Transaction;

  /** get all the transactions in the repository.
   * @returns array of transaction. null if no records found
   */
  all(): Array<Transaction>;

    /** finds the first matching Transaction in the repository based on its ID. supports regular expression matching.
   * @returns transaction. null if not found
   */
  getByID(id: string): Transaction;

  /** find transactions in a category. supports regular expression matching.
   * @returns array of transaction. null if no records found
  */
  findByCategory(category: string): Array<Transaction>;

  /** find transactions based on the description. supports regular expression matching.
   * @returns array of transaction. null if no records found
  */
  findByDescription(description: string): Array<Transaction>;

  /** find transactions based on the source. supports regular expression matching.
   * @returns array of transaction. null if no records found
  */
  findBySource(source: string): Array<Transaction>;

  /** find transactions based on the type. supports regular expression matching.
   * @returns array of transaction. null if no records found
  */
  findByType(type: string): Array<Transaction>;

  /** find transactions between two given amounts (inclusive).
   * @returns array of transaction. null if no records found
  */
  findByAmountBetween(lower: number, upper: number): Array<Transaction>;

  /** find transactions above a given amount (exclusive).
   * @returns array of transaction. null if no records found
  */
  findByAmountAbove(amount: number): Array<Transaction>;

  /** find transactions below a given amount (exclusive).
   * @returns array of transaction. null if no records found
  */
  findByAmountBelow(amount: number): Array<Transaction>;

  /** find transactions at a given amount (inclusive).
   * @returns array of transaction. null if no records found
  */
  findByAmount(amount: number): Array<Transaction>;

  /** find transactions between two given dates (inclusive).
   * @returns array of transaction. null if no records found
  */
  findByDateBetween(lower: Date, upper: Date): Array<Transaction>;

  /** find transactions after a given date (exclusive).
   * @returns array of transaction. null if no records found
  */
  findByDateAfter(date: Date): Array<Transaction>;

  /** find transactions before a given date (exclusive).
   * @returns array of transaction. null if no records found
  */
  findByDateBefore(date: Date): Array<Transaction>;

  /** find transactions on a given date.
   * @returns array of transaction. null if no records found
  */
  findByDate(date: Date): Array<Transaction>;
}

When Life Happens

So halfway through this step I started a new course, moved halfway around the world, and, well, got completely sidetracked.

In the interim, Angular has gone from version 7.0.3 up to 9.0. We now have the Raspberry Pi 4, which is now fully supported by the latest kernel. Node.js has gone from v8.12 up to v12.16.

I tried to start up the project, which regrettably did not work. Initial analysis points to my decision to move from highly generic methods taking comparator parameters to more specific methods. It turns out I still had changes to make to update my code to use the new interface I had introduced.

This, however, introduces a typical dilemma when working on a project over such a long time frame: if you have updated your local environment, do you first revert your development system to match your project and then make the changes, or do you bite the bullet and upgrade your project to the current versions in play?

I plan to follow the latter course and update all the elements of the project to use the latest versions. I will hopefully be able to deal with the fallout when I deploy to my hosting system (when it arrives, or I build a new one… OpenFaaS Pi cluster maybe?).

So that concludes this chapter; the next chapter will focus on updating everything.

Contributing Research Material

Piros, Tamas. Sharing a MongoDB connection in NodeJS/Express. Web Article. 13 September 2018.

Hamedani, Mosh. Repository Pattern with C# and Entity Framework, Done Right. YouTube. 15 October 2015.

My Web App Journey – Going Solid

In the previous chapter I made some progress in the design of the controller to handle the basic CRUD operations for a transaction.

The project will grow to include more controllers, but right now the transaction controller lacks data persistence. I could design the controller to connect directly to a database of my choice (such as MongoDB); however, I would prefer to use the repository pattern to abstract the data implementation.

I also plan on shifting my approach in the direction of Domain Driven Design (DDD) with some SOLID principles. So this chapter will be a detour before adding MongoDB support.

Becoming Independent

I have recently been looking at content on dependency injection and dependency inversion. The goal is to further decouple and abstract the design. I want my application to work with the generic concept of a repository without ever instantiating one directly, instead having an instance “injected” wherever it is needed.

I plan to rely mainly on inversion, to avoid concretions that tightly couple the design of one aspect to another. Of course, at some point the concretions still need to be defined and mapped together to make the application work. This will be made possible using the Inversify module. (It also means this post is going to be a long one.)

Install it by running this command in the root of the project:

npm install inversify reflect-metadata --save

Inversify relies on some black magic to make the injection possible, which is not enabled by default. You need to tell the TypeScript compiler to enable experimental decorators and to include the types from the reflect-metadata module. This gives us the following tsconfig.json:

{
    "compileOnSave": true,
    "compilerOptions": {
      "experimentalDecorators": true,
      "emitDecoratorMetadata": true,
      "moduleResolution": "node",
      "types": ["reflect-metadata"],
      "target": "es6",     //default is es5
      "module": "commonjs",//CommonJs style module in output
      "outDir": "dist"  ,   //change the output directory
      "resolveJsonModule": true //to import out json database
    },
    "include": [
      "src/**/*.ts"       //which kind of files to compile
    ],
    "exclude": [
      "node_modules"     //which files or directories to ignore
    ]
 }

With that out of the way I can focus on making my dream come true. As mentioned earlier the concretions get managed from a single point in the project.

Tidying up

There are a few changes I need to make to the project, along with the use of generics to help generalize the design. Rather than relying on concretions, I first need to define an interface describing how to interact with a repository of transactions. I could create a new file for the interface, but I chose to append it to the existing transaction model, as it is tightly coupled to the design of a transaction. I included some TypeDoc comments so that IntelliSense documents the expected behavior any repository implementing the interface must honor.

export class Transaction {
  id: number;
  amount: number;
  currency: string;
  date: Date;
  description: string;
  source: string;
  type: string;
  category?: string;
}

export interface TransactionRepository {
  /** finds the first matching Transaction in the repository
   * @returns transaction. null if not found
   */
  find(filter: (Transaction) => boolean): Transaction;
  /** add a new transaction to the repository
   * @returns transaction. null if it failed to add
   */
  add(record: Transaction): Transaction;
  /** updates a transaction in the repository
   * @returns transaction. null if unsuccessful */
  update(record: Transaction): Transaction;
  /** removes a record from the repository
   * @returns transaction. null if unsuccessful */
  remove(filter: (Transaction) => boolean): Transaction;
  /** finds all the matching transactions in the repository
   * @returns array of transaction. null if nothing found
   */
  findAll(filter: (Transaction) => boolean): Array<Transaction>;
}

From there I need to define some symbols that will be used for handling the dependency injection. These will later provide the means to manage the binding to the instances that will be injected when the application is running.

export const REPOSITORY_TYPES = {
  Transaction: Symbol.for('Transactions')
};

export const CONTROLLERS = {
  Transaction: Symbol.for('Transaction')
};

export const SINGLETONS = {
  Routing: Symbol.for('Routing')
};

I now need to update the controller to move the data array out and replace it with an injected repository. I also need to encapsulate it into a class, which will allow the controller to be instantiated at run time as it gets injected.

import {Request, Response, Router} from 'express';
import {Transaction, TransactionRepository} from '../models/transaction';
import { REPOSITORY_TYPES } from '../types';
import { inject, injectable } from "inversify";

@injectable()
export class TransactionController {

  private repository: TransactionRepository;

  private router: Router = Router();

  public constructor(@inject(REPOSITORY_TYPES.Transaction) repository: TransactionRepository) {

    this.repository = repository;

    this.router.get('/:id', (req: Request, res: Response) => {

      let id = req.params.id;
      let transaction = this.repository.find(x => x.id == id);
    
      if (transaction == null)
        res.status(404).send();  // Record not found
      else
        res.status(200).send(transaction);
    });
    
    this.router.post('/', (req: Request, res: Response) => {
    
      let transaction: Transaction = {
        id: 0,
        type: req.body.type,
        date: new Date(req.body.date),
        currency: req.body.currency,
        amount: req.body.amount,
        source: req.body.source,
        description: req.body.description
      };
    
      this.repository.add(transaction);
    
      res.status(200).send(transaction);
    
    });
    
    this.router.put('/:id', (req: Request, res: Response) => {
    
      let transaction = this.repository.update({
        id: Number(req.params.id),
        amount: req.body.amount,
        currency: req.body.currency,
        date: new Date(req.body.date),
        description: req.body.description,
        source: req.body.source,
        type: req.body.type,
        category: req.body.category
      });
    
      if (transaction == null) {
        res.status(404).send();  // Record not found
        return;
      }

      res.status(200).send(transaction);
    
    });
    
    this.router.delete('/:id', (req: Request, res: Response) => {
    
      let id: number = Number(req.params.id);
    
      if (this.repository.find(x => x.id === id) == null) {
        res.status(404).send();  // Record not found
        return;
      }
    
      // The repository's remove filter selects which records to KEEP
      this.repository.remove(x => x.id !== id);
      
      res.status(200).send('Transaction deleted');
    
    });
  }
  
  public getRouter() : Router {
    return this.router;
  }

}

Getting Real

So the next step is to implement the TransactionRepository interface. Using the array from the previous controller I can create a basic repository that can be injected into the design. This also allows me to swap one repository out for another without having to change any other code (unless I change the interface design).

import {injectable} from 'inversify';

import {Transaction, TransactionRepository} from '../models/transaction';

@injectable()
export class TransactionArrayRepository implements TransactionRepository {
  private next_id: number = 5;

  private transactions: Array<Transaction> = [
    { id: 1, type: 'DEBIT', date: new Date('2018-12-28'), currency: 'USD', amount: -10.00, source: 'DEBIT_CARD', description: 'Soup' },
    { id: 2, type: 'DEBIT', date: new Date('2018-12-28'), currency: 'USD', amount: -15.00, source: 'DEBIT_CARD', description: 'Dessert' },
    { id: 3, type: 'DEBIT', date: new Date('2018-12-28'), currency: 'USD', amount: -20.00, source: 'DEBIT_CARD', description: 'Drinks' },
    { id: 4, type: 'DEBIT', date: new Date('2018-12-28'), currency: 'USD', amount: -5.00, source: 'DEBIT_CARD', description: 'Tip' }
  ];

  public find(evaluator: (transaction: Transaction) => boolean): Transaction | null {
    return this.transactions.find(evaluator) || null;
  }

  public findAll(evaluator: (transaction: Transaction) => boolean): Array<Transaction> {
    if (evaluator == null) return Object.assign([], this.transactions);

    return this.transactions.filter(evaluator);
  }

  public add(record: Transaction): Transaction {
    let entry = Object.assign({}, record);
    entry.id = this.next_id++;  // post-increment so the first added record gets id 5
    this.transactions.push(entry);
    return entry;
  }

  public update(record: Transaction): Transaction | null {
    let entry = this.find(x => x.id === record.id);
    if (entry == null) return null;
    return Object.assign(entry, record);
  }

  public remove(evaluator: (transaction: Transaction) => boolean): Transaction | null {
    // The evaluator selects which records to keep, so the removed
    // record is the first one the evaluator rejects
    let removed = this.transactions.find(x => !evaluator(x)) || null;
    this.transactions = this.transactions.filter(evaluator);
    return removed;
  }
}
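
Since the repository is just a class behind an interface, it can also be exercised directly, outside Express and the container. A quick sanity-check sketch (not part of the project, just an illustration with made-up values):

import 'reflect-metadata'; // required because the class carries inversify's @injectable decorator

import {TransactionArrayRepository} from './repositories/transactionArray';

const repo = new TransactionArrayRepository();

// Look up one of the seeded records
const soup = repo.find(x => x.id === 1);
console.log(soup ? soup.description : 'not found'); // 'Soup'

// Add a record and confirm it receives the next id in sequence
const added = repo.add({
  id: 0, // ignored; the repository assigns its own id
  type: 'DEBIT',
  date: new Date('2018-12-29'),
  currency: 'USD',
  amount: -2.50,
  source: 'DEBIT_CARD',
  description: 'Coffee'
});
console.log(added.id); // 5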

Time to Re-Route things

With the changes to the TransactionController I need to tweak the route mapping, and also tweak the design to make it injectable. It's not fully in line with SOLID, but it's a step in the right direction.

import {RootController} from './controllers/root';
import {TransactionController} from './controllers/transaction';
import { injectable, inject } from 'inversify';
import { CONTROLLERS } from './types';

@injectable() 
export class Mapper {

  private _TransactionController: TransactionController;

  public constructor(@inject(CONTROLLERS.Transaction) transactionController: TransactionController) {
    this._TransactionController = transactionController;
  }

  public addRoutes(app): void {
    app.use('/', RootController);
    app.use('/transaction', this._TransactionController.getRouter());
  }

}

Binding things together

Dependency injection relies on mapping the types to their class implementations. This is what makes it possible to implement, say, a new repository without having to update every place it is needed; instead I just update the mapping in a single place.

import "reflect-metadata";
import {Container} from 'inversify';
import {TransactionController} from './controllers/transaction';
import {TransactionRepository} from './models/transaction';
import {TransactionArrayRepository} from './repositories/transactionArray';
import {Mapper} from './routes';
import { CONTROLLERS, SINGLETONS, REPOSITORY_TYPES } from "./types";

const container = new Container();
container.bind<TransactionController>(CONTROLLERS.Transaction).to(TransactionController);
container.bind<Mapper>(SINGLETONS.Routing).to(Mapper);
container.bind<TransactionRepository>(REPOSITORY_TYPES.Transaction).to(TransactionArrayRepository);

export {container};
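
To see the payoff of binding in a single place: when the MongoDB-backed repository gets written in the next chapter, only this file should need to change. Roughly along these lines (TransactionMongoRepository is hypothetical and does not exist yet, hence commented out):

// import {TransactionMongoRepository} from './repositories/transactionMongo';
// ...then swap the existing repository binding for:
// container.bind<TransactionRepository>(REPOSITORY_TYPES.Transaction).to(TransactionMongoRepository);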

The final piece

Time to wrap it all up together in the main entry point, where the container from the inversify config gets used to resolve the route mapper and ensure the mappings are performed.

import * as express from 'express';
import { Mapper } from './routes';
import * as bodyParser from 'body-parser';
import { SINGLETONS } from './types';
import { container } from './inversify.config';

class App {

    public app: express.Application;
    private RouteMapper: Mapper =  container.get<Mapper>(SINGLETONS.Routing);

    constructor() {
        this.app = express();
        this.config();
        this.RouteMapper.addRoutes(this.app);
    }

    private config(): void {
        this.app.use(bodyParser.json());
    }

}

export default new App().app;
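
The file that actually starts the server listening is unchanged by all of this and not shown in this chapter. For completeness, a minimal version would look something like the sketch below (the port number is an assumption for illustration):

import app from './app';

const port = 3000; // assumed for this sketch

app.listen(port, () => console.log('Listening on port ' + port));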

With those changes, not much appears to have changed on the surface. In the next chapter I will write a new repository class that works with MongoDB and change out the old one.

Contributing Research Material

Wendel, Erik. Patterns — Generic Repository with Typescript and Node.js. Web Article. 20 March 2018.

Jansen, Remo H. Implementing SOLID and the onion architecture in Node.js with TypeScript and InversifyJS. Web Article. 13 April 2018.

Abrickis, Andres. Typescript dependency injection: setting up InversifyJS IoC for a TS project. Web Article. 9 July 2018.

Taylor, Jason. Clean Architecture with ASP.NET Core 2.1. YouTube. 18 October 2018.

My Web App Journey – Controllers

In the previous chapter I refactored the design to break up the logic into multiple files while laying down the foundation for the controller pattern.

This chapter will focus on getting a controller that will implement basic CRUD operations that respond to different HTTP methods.

For now I will keep the data inside the controller, but later I want to abstract it into a data repository that manages the data IO and persistence.

Show me the Money

Time to build out the controller that will handle the different CRUD operations for a single transaction. Create a new TypeScript file in the controllers folder with just the bare bones for now.

controllers/transaction.ts

import {Request, Response, Router} from 'express';

const router: Router = Router();

export const TransactionController: Router = router;

Next we need to map the controller to an endpoint so that calls received are directed to the controller. I updated the file in the main directory that manages the routes to now look like this.

routes.ts

import {RootController} from './controllers/root';
import {TransactionController} from './controllers/transaction';

class Mapper {

  public addRoutes(app): void {
    app.use('/', RootController);
    app.use('/transaction', TransactionController);
  }

}

export const RouteMapper: Mapper = new Mapper();

The endpoint is now mapped, but if you try to access it you are only going to get an error because the controller does nothing yet. My first goal is to create a dummy array to store data and then introduce a method to handle a GET request that returns a given data element. The transaction controller now looks like this.

controllers/transaction.ts

import {Request, Response, Router} from 'express';

const router: Router = Router();

var transactions = [
  { id: 1, type: "DEBIT", date: new Date('2018-11-28'), currency: 'USD', amount: -10.00, source: "DEBIT_CARD", description: "Soup" },
  { id: 2, type: "DEBIT", date: new Date('2018-11-28'), currency: 'USD', amount: -15.00, source: "DEBIT_CARD", description: "Dessert" },
  { id: 3, type: "DEBIT", date: new Date('2018-11-28'), currency: 'USD', amount: -20.00, source: "DEBIT_CARD", description: "Drinks" },
  { id: 4, type: "DEBIT", date: new Date('2018-11-28'), currency: 'USD', amount: -5.00, source: "DEBIT_CARD", description: "Tip" }
];

router.get('/:id', (req: Request, res: Response) => {
  let id = Number(req.params.id);  // route params arrive as strings
  var transaction = transactions.find(x => x.id === id);

  if (transaction == null)
    res.status(404).send(); // Record not found
  else
    res.status(200).send(transaction);
});

export const TransactionController: Router = router;

One thing to note is that I did not specify the “/transaction/?” prefix in the controller because we handled that in the route mapping, so inside the controller the mapping is treated as relative. The other part is extracting the id from the route parameters by labeling the value in the path (“:id”) and then referencing it to filter my dummy data before returning it. Route parameters always arrive as strings, which is why the id gets converted to a number before comparing.

I can now compile my project, start it up, and use Postman or even a web browser to submit an HTTP GET request for one of the transactions.
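
The exact payload depends on the data, but for the seeded array above a request for id 1 should come back along these lines (note that JSON serializes the Date into an ISO string):

GET /transaction/1

{
  "id": 1,
  "type": "DEBIT",
  "date": "2018-11-28T00:00:00.000Z",
  "currency": "USD",
  "amount": -10,
  "source": "DEBIT_CARD",
  "description": "Soup"
}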


Giving Back

Now that we have implemented a method for getting a single transaction, the next step is creating a new transaction. This is achieved by using the HTTP POST method to send a transaction to the server to be added to the list of transactions.

Posting JSON-formatted data to the server requires using “body-parser” to enable parsing of JSON in the body of the request. So we need to update the app, giving us the following.

app.ts

import * as express from 'express';
import { RouteMapper } from './routes';
import * as bodyParser from 'body-parser';

class App {

    public app: express.Application;

    constructor() {
        this.app = express();
        this.config();
        RouteMapper.addRoutes(this.app);
    }

    private config(): void {
        this.app.use(bodyParser.json());
    }

}

export default new App().app;
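
As an aside, newer versions of Express (4.16 and up) ship their own JSON parser, so this.app.use(express.json()) would achieve the same result without the extra dependency; body-parser works fine either way.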

With that in place we can now introduce a method to handle HTTP POST calls to the transaction controller. I also added a counter to hand out unique ids for new transactions added to the list. When the POST method is called, it creates a new transaction from the values in the request body, pushes it onto the transaction list, and then returns the created transaction with its id to confirm the transaction was created successfully.

controllers/transaction.ts

import {Request, Response, Router} from 'express';

const router: Router = Router();

var next_id = 5;

var transactions = [
  { id: 1, type: "DEBIT", date: new Date('2018-11-28'), currency: 'USD', amount: -10.00, source: "DEBIT_CARD", description: "Soup" },
  { id: 2, type: "DEBIT", date: new Date('2018-11-28'), currency: 'USD', amount: -15.00, source: "DEBIT_CARD", description: "Dessert" },
  { id: 3, type: "DEBIT", date: new Date('2018-11-28'), currency: 'USD', amount: -20.00, source: "DEBIT_CARD", description: "Drinks" },
  { id: 4, type: "DEBIT", date: new Date('2018-11-28'), currency: 'USD', amount: -5.00, source: "DEBIT_CARD", description: "Tip" }
];

router.get('/:id', (req: Request, res: Response) => {
  let id = Number(req.params.id);  // route params arrive as strings
  var transaction = transactions.find(x => x.id === id);

  if (transaction == null)
    res.status(404).send(); // Record not found
  else
    res.status(200).send(transaction);
});

router.post('/', (req: Request, res: Response) => {
  
  var transaction = {
    id: next_id++,
    type: req.body.type, 
    date: new Date(req.body.date), 
    currency: req.body.currency, 
    amount: req.body.amount, 
    source: req.body.source, 
    description: req.body.description
  };
  
  transactions.push(transaction);

  res.status(200).send(transaction);

});

export const TransactionController: Router = router;

So now when we post some JSON data to the endpoint, we should get a 200 response echoing the same details with the new id included, confirming the record was created successfully. You can also try performing a GET call for the new id, which should return your record.
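
For example, posting the request below should produce the response that follows it, with id 5 since the counter starts at 5 (the values are made-up sample data):

POST /transaction
Content-Type: application/json

{
  "type": "DEBIT",
  "date": "2018-11-29",
  "currency": "USD",
  "amount": -7.50,
  "source": "DEBIT_CARD",
  "description": "Coffee"
}

{
  "id": 5,
  "type": "DEBIT",
  "date": "2018-11-29T00:00:00.000Z",
  "currency": "USD",
  "amount": -7.5,
  "source": "DEBIT_CARD",
  "description": "Coffee"
}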

Have some Class

So I want to make updates to the transaction model, including a classification field to help organize transactions. My current design relies on a generic array that infers the properties of a transaction, which becomes a challenge when it comes to adding new fields to accept new data.

To remedy this problem, I will start to model my data using classes, which allows declaring data elements that are not required initially but will be used at a later stage. So we create a folder alongside the controllers called “models” and create inside it a new class to define the Transaction.

One thing I had to make sure of is that some properties are optional, so that they don't need to be specified when creating the array, which helps keep some of the code lean.

models/transaction.ts

export class Transaction {
  id: number;
  amount: number;
  currency: string;
  date: Date;
  description: string;
  source: string;
  type: string;
  category?: string;
}
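
To make the optional property concrete: both of the assignments below type-check because category is declared with a ?, while omitting a required property such as amount would be a compile error (the values are made-up samples):

const lunch: Transaction = {
  id: 5,
  type: 'DEBIT',
  date: new Date('2018-11-29'),
  currency: 'USD',
  amount: -12.00,
  source: 'DEBIT_CARD',
  description: 'Lunch'
};

const categorizedLunch: Transaction = { ...lunch, id: 6, category: 'Food' };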

With this change I can update the controller to use stronger typing and implement a method to handle an HTTP PUT request. While I am at it, the final method supports deleting a transaction via an HTTP DELETE call.

controllers/transaction.ts

import {Request, Response, Router} from 'express';
import {Transaction} from '../models/transaction';

const router: Router = Router();

let next_id: number = 5;

let transactions: Array<Transaction> = [
  { id: 1, type: 'DEBIT', date: new Date('2018-11-28'), currency: 'USD', amount: -10.00, source: 'DEBIT_CARD', description: 'Soup' },
  { id: 2, type: 'DEBIT', date: new Date('2018-11-28'), currency: 'USD', amount: -15.00, source: 'DEBIT_CARD', description: 'Dessert' },
  { id: 3, type: 'DEBIT', date: new Date('2018-11-28'), currency: 'USD', amount: -20.00, source: 'DEBIT_CARD', description: 'Drinks' },
  { id: 4, type: 'DEBIT', date: new Date('2018-11-28'), currency: 'USD', amount: -5.00, source: 'DEBIT_CARD', description: 'Tip' }
];

router.get('/:id', (req: Request, res: Response) => {
  let id = Number(req.params.id);  // route params arrive as strings
  let transaction = transactions.find(x => x.id === id);

  if (transaction == null)
    res.status(404).send();  // Record not found
  else
    res.status(200).send(transaction);
});

router.post('/', (req: Request, res: Response) => {

  let transaction: Transaction = {
    id: next_id++,
    type: req.body.type,
    date: new Date(req.body.date),
    currency: req.body.currency,
    amount: req.body.amount,
    source: req.body.source,
    description: req.body.description
  };

  transactions.push(transaction);

  res.status(200).send(transaction);

});

router.put('/:id', (req: Request, res: Response) => {

  let id: number = Number(req.params.id);
  let transaction = transactions.find(x => x.id === id);

  if (transaction == null) {
    res.status(404).send();  // Record not found
    return;
  }

  transaction.amount = req.body.amount || transaction.amount;
  transaction.currency = req.body.currency || transaction.currency;
  transaction.date = req.body.date ? new Date(req.body.date) : transaction.date;
  transaction.description = req.body.description || transaction.description;
  transaction.source = req.body.source || transaction.source;
  transaction.type = req.body.type || transaction.type;
  transaction.category = req.body.category;

  res.status(200).send(transaction);

});

router.delete('/:id', (req: Request, res: Response) => {

  let id: number = Number(req.params.id);

  if (transactions.find(x => x.id === id) == undefined) {
    res.status(404).send();  // Record not found
    return;
  }

  transactions = transactions.filter(x => x.id !== id);
  
  res.status(200).send('Transaction deleted');

});

export const TransactionController: Router = router;
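
To spot-check the new methods, a PUT with just a category should merge into the existing record thanks to the || fallbacks. A sample exchange (request followed by response):

PUT /transaction/2
Content-Type: application/json

{ "category": "Food" }

{
  "id": 2,
  "type": "DEBIT",
  "date": "2018-11-28T00:00:00.000Z",
  "currency": "USD",
  "amount": -15,
  "source": "DEBIT_CARD",
  "description": "Dessert",
  "category": "Food"
}

A follow-up DELETE to /transaction/2 should respond with 'Transaction deleted', and repeating it should return a 404 since the record is gone.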

I now have the basic methods in place to handle the CRUD functions for a transaction. The next step is to work on integrating the methods with MongoDB to handle the persistence, which I plan to tackle in the next chapter.