
Canvas Philosophy

In this article we will outline the intended use of the Canvas platform and best practices for developing on it:

1) Separate workflows for separate tasks

In general, we recommend having a separate workflow for each integration task whenever possible. It is easier to support, develop, and extend the integrations when they are split into different working units.

However, the bundled integrations created by ConnectMyApps often perform multiple tasks per workflow; this is dictated by the need to simplify template installation for the end client.

For example, in a workflow that transfers orders from a webshop or CRM to an ERP system, the orders are only meaningful if the related customers and products exist in the target system, so it may be useful to include their transfer in the same workflow.

If the end client who will manage and set up the workflow is a small company or group of people without any technical/integration understanding, rich templates that perform several steps can be a good choice. Our integration experts are available to advise on the best course of action.

2) Scheduling the workflow

There are two preferred scheduling patterns:

  • Ask the source application as often as possible for new changes when the API can return only the data changed since the last run (known as "searching on the delta"; see the sketch after this list). In this case, it is normal to set the workflow running interval between 5 minutes and one hour, depending on how quickly the data must be transferred to the target system.

    It is advisable to schedule the integration at the lowest necessary frequency. For example, a workflow between a POS and an ERP that transfers daily sales does not need to run every five minutes; it is more efficient to run it once per day at close of business. The same applies to a Payroll integration, which is typically processed once per month.

  • If the source API cannot return a delta export and the whole dataset must be queried each time the integration runs, the workflow schedule should be set to one hour or more, depending on the dataset size. For very small datasets, schedules of less than one hour are also possible. We ask you to use common sense and take the typical workflow execution time into account.

    If the workflow execution time is 30 minutes and the scheduler is set to run it every 30 minutes, the workflow will load one of the workflow processors to 100%. This violates our fair usage policy and will be resolved either by increasing the workflow fee, where this load is genuinely necessary, or by asking you to change the integration or reduce the scheduler frequency. Our integration experts are available to advise on the best course of action.
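
To make the delta pattern concrete, here is a minimal JavaScript sketch. The endpoint URL, the modified_since parameter, the API token, and the getLastRunTime/setLastRunTime state helpers are hypothetical stand-ins for whatever the source API and your workflow actually provide; they are not part of the Canvas platform.

// Minimal sketch of "searching on the delta": only records changed since the
// previous successful run are requested. All names below are assumptions for
// illustration, not actual Canvas or vendor APIs.
async function fetchChangedOrders(getLastRunTime, setLastRunTime, apiToken) {
  const lastRun = getLastRunTime() || '1970-01-01T00:00:00Z';

  const response = await fetch(
    `https://api.example-crm.com/v1/orders?modified_since=${encodeURIComponent(lastRun)}`,
    { headers: { Authorization: `Bearer ${apiToken}` } }
  );
  const changedOrders = await response.json();

  // Remember when this run happened so the next run only asks for newer changes.
  setLastRunTime(new Date().toISOString());

  return changedOrders;
}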

3) Building the integration

On the workflow builder page, you drag blocks to create the actual integration. In Canvas, blocks are meant to be modular and single-purpose. Although it is possible to run the whole integration in one monolithic mega-block, we recommend creating new, separate blocks as needed, in keeping with this philosophy.

As a rule of thumb, you should isolate getting data, mapping data (client-related business logic), and posting data into separate blocks. This provides better maintainability and extensibility of the workflows.
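
As a minimal illustration of this split, a mapping block can be written so that it only transforms data and contains no HTTP calls at all. The input shape (employees from an HR system) and the target field names below are hypothetical and chosen only for illustration.

// Minimal sketch of a mapping-only step: no fetching and no posting here.
// The input shape and the target field names are assumptions for illustration.
function mapEmployees(input) {
  return input.employees.map(emp => ({
    employeeNumber: emp.id,
    fullName: `${emp.firstName} ${emp.lastName}`,
    email: emp.workEmail,
  }));
}

// The previous block is responsible for fetching the employees;
// the next block is responsible for posting the mapped result.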

4) Mapper block or custom code block?

The mapper and custom code blocks are both great tools for holding the mapping logic, but which one should be selected for a specific integration?

Although it is tempting to always use the mapper by default because of its simplicity and visual mapping interface, there are a few scenarios where custom code is the better choice:

  • Many if/else statements. In general, if/else conditions are the hardest to handle within the mapper. If the logic is going to be full of conditional operators, custom code is the better tool to use (see the sketch after this list).
  • The integration is going to be heavy on business logic.
  • Many nested loops, which in some cases are hard to handle in the mapper.
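
To illustrate the first point, the JavaScript sketch below encodes a few client-specific rules that would need many nested conditions in a visual mapper. The field names, thresholds, and rules are hypothetical and exist only for illustration.

// Hypothetical client-specific rules that would be hard to express visually.
// Field names and thresholds are assumptions for illustration only.
function resolvePaymentTermsDays(order) {
  if (order.customer.type === 'public-sector') return 30;
  if (order.total > 10000 && order.customer.creditApproved) return 45;
  if (order.customer.country !== 'NO') return 14;
  return 7;
}

function mapOrders(input) {
  return input.orders.map(order => ({
    orderNumber: order.id,
    paymentTermsDays: resolvePaymentTermsDays(order),
  }));
}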

The mapper works best when:

  • Many fields must be included in the mapping
  • The business logic layer is thin or absent
  • Simple Excel-like functions need to be applied to the fields

The mapper is also extremely useful for prototyping and for building template integrations. Even in complex integration cases, a well-organized workflow still has good uses for the mapper, especially when the business logic can be isolated into custom code before or after the mapping. An integration created this way is extremely easy to support and modify, even for non-technical people.

5) Maintainability is key

Comments in the code are welcome. We recommend adding a comment to every number or string literal that is used in the data mapping:

let vt_tpe = 3;

A hardcoded value like this has an obvious meaning at the time for the developer who wrote it, but over time memories fade, and subsequent developers may not understand its context or meaning. Changing this line to the following would pay off in the future:

// VAT type. Possible values: 1 = Standard rate (20%), 2 = Reduced rate (5%), 3 = Zero rate (0%)
// It was confirmed with the client in ticket #3145 that the VAT rate is hardcoded to 3 permanently

let vt_tpe = 3;

Let's imagine a real-life scenario, and one we have experienced many times working with clients who have legacy integrations developed by other integrators.

Developer A and Developer B have equal technical skills and both work on a professional level with APIs.

They were both tasked with building an integration that transfers employees from an HR system to a Payroll system.

Developer A spent 14 hours and built the workflow in one "mega block" that mixed fetching the employees, client-specific business logic, and posting the data. No comments were included and no logging was activated.

Developer B spent 18 hours, separated each logical part of the integration into its own block, added comments to the mapping steps, and added logging to spot corrupted/incomplete data.

After one year both developers leave the company.

In the "best case" scenario, the client requests minor changes to the integration after a year. Integration B would be easily extended, but integration A has become a black box, with a high risk that changes will break it or cause corrupted data to be transferred to the target system.

In the "worst case" scenario, in 3 years the Payroll system introduces a new version of the API with different field names, and sunsets the legacy version. In this case, migration of the entire integration is required.

Over the lifespan of the integration project, there is a high risk that the people who built the integration, including its business logic, will no longer be available or will not remember the details.

In the integration A scenario, there are likely to be high costs and frustration from the client, and in the worst case, the integration will have to be rebuilt from scratch.

In the integration B scenario, new developers will be able to change the block that posts data to the Payroll system and use the comments in the mapping code block to match the field names to the new version of the API, which provides similar logic. After a short round of testing, the integration will be ready to work with the new API.