How easily a backend solution can break your fancy front-end

When working on a multi-purpose IT application, there is simply no more crucial phase of the project than the initial analysis. There is a tendency to jump-start certain projects by defining user stories perceived as “easy“ and delivering quick-win functionalities. Among other things, this serves as an assurance that the team is able to deliver from day one, but if not done properly it might bring more trouble in the future, because working on a single task without proper knowledge of the whole product is a certain way to fail in the long run.

It is also the reason why multidisciplinary teams exist and why user experience experts are no longer perceived as a money-generating buzzword but rather as a valuable addition to new (and not exclusively IT) projects.

Let’s set theory aside and explore, as the title suggests, one very simple real-world example: the trouble that can arise from improperly stitching together the backend and the front-end.

Scenario

Imagine a highly customized ServiceNow portal with a lot of built-from-scratch widgets. Part of the customer portal experience was supposed to be the management of entitlements. The back-end part of the epic was to get those entitlements and the related table information into the database; the front-end part was to create a pleasant user interface for managing orders and approvals. One of the key features of the new UI was a custom autocomplete form suggesting information from the dedicated backend tables. The story was described and reviewed with the customer, and the solution was developed and successfully tested with a response time of around 250 milliseconds (for the suggested text to appear). But when it was moved to production, the feature suddenly broke.

What happened?

It turned out that the only problem was an insufficient sample size. No real data import had ever been tested by the customer. Since company policy prevented the use of the real production file, the source table had been populated with only hundreds of records instead of the millions in production. And since advanced grouping and filters were used, the performance of the autocomplete feature dropped to somewhere around 20 seconds, rendering the whole functionality useless.

What could have prevented that?

Two things. First: keep asking questions. Unfortunately for us, we already had answers to most of these questions from the beginning, so the second part of the answer was more important for fixing the issue. But let’s follow the line of questioning just for the sake of outlining the solution: How many records will be available for the form? What is the source of the table? Do we have a different, more condensed source? Can we create one? Will it be possible to obtain it from your subsidiary? Will we need to source different tables for different forms in the future?

Second: keep an eye on performance from the beginning. The rule of thumb should be: if something is supposed to be big in production (such as the data source), make it ten times bigger in the test environment. See how it holds up and when it eventually breaks (a small sketch of this idea follows the list below). By simply populating the table in advance, we could have prevented:

  • unhappy stakeholders
  • defect fixing time
  • several meetings
  • additional development
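
As a minimal sketch of that rule of thumb (not ServiceNow-specific; it uses SQLite from the Python standard library, and the table and column names are purely hypothetical), you can populate a throwaway table at several volumes and time the kind of filtered, grouped query an autocomplete widget would run:

    # Populate a throwaway table with dummy rows, then time an autocomplete-style
    # query (filter + grouping + sorting) at increasing row counts.
    import random, sqlite3, string, time

    def populate(conn, rows):
        conn.execute("CREATE TABLE entitlement (name TEXT, category TEXT, account TEXT)")
        conn.executemany(
            "INSERT INTO entitlement VALUES (?, ?, ?)",
            (
                (
                    "".join(random.choices(string.ascii_lowercase, k=12)),
                    random.choice(["license", "support", "hardware"]),
                    f"ACC-{random.randrange(10_000)}",
                )
                for _ in range(rows)
            ),
        )
        conn.commit()

    def autocomplete_query(conn, prefix):
        return conn.execute(
            "SELECT name, COUNT(*) FROM entitlement "
            "WHERE name LIKE ? GROUP BY name ORDER BY name LIMIT 10",
            (prefix + "%",),
        ).fetchall()

    for rows in (1_000, 100_000, 1_000_000):  # hundreds of rows vs. production-like volume
        conn = sqlite3.connect(":memory:")
        populate(conn, rows)
        start = time.perf_counter()
        autocomplete_query(conn, "ab")
        print(f"{rows:>9} rows: {time.perf_counter() - start:.3f} s")

Watching the response time climb as the row count grows is usually enough to start a conversation about indexing, filtering or the data source itself long before production.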

When you hear too often from the customer that something cannot be done (such as obtaining the real data file), you may forget that you, the developers, can work on tasks that are not implicit in the delivery. For example, you can populate the table yourself. Even if there are multiple strict rules to be applied to guarantee the uniqueness of the records, it is better to spend a few hours writing a useful population script than to face production issues down the road with very limited time to solve them.
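
As an illustration, here is a minimal sketch of such a population script, assuming the standard ServiceNow Table API (POST /api/now/table/<table>); the instance URL, credentials, table and field names are placeholders that would have to match your own data model:

    # Generate unique dummy records and push them into a (hypothetical) custom table
    # through the ServiceNow Table API.
    import random
    import string

    import requests

    INSTANCE = "https://example.service-now.com"
    TABLE = "u_entitlement"              # hypothetical custom table
    AUTH = ("svc_account", "secret")     # dedicated non-production account

    def unique_records(count):
        """Generate dummy records whose key field is guaranteed to be unique."""
        seen = set()
        while len(seen) < count:
            number = "ENT-" + "".join(random.choices(string.digits, k=8))
            if number in seen:
                continue
            seen.add(number)
            yield {
                "u_number": number,
                "u_name": "Dummy entitlement " + number,
                "u_category": random.choice(["license", "support", "hardware"]),
            }

    for record in unique_records(100_000):
        response = requests.post(
            f"{INSTANCE}/api/now/table/{TABLE}",
            auth=AUTH,
            headers={"Accept": "application/json"},
            json=record,
            timeout=30,
        )
        response.raise_for_status()

For millions of records, an import set or a server-side background script would be faster than individual REST calls, but the principle stays the same: the uniqueness rules live in the generator, so the script can be rerun for any volume you need.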

With the knowledge of similar issues, you should then be able to explain to the client that such tasks are a crucial part of the development and that the additional time spent on something not visible in the final product may save much more money later.

Also, similar scripts or functionalities could be reused for different stories or, better yet, for different clients, which could save you more time in the future. You could simply add them to your company’s knowledge base if you have a shared space for developers.

The solution

In the end, the solution was a mixture of both of these principles. We reopened certain closed topics regarding data sources and understood enough to choose a different set of source files (with a much lower number of records), and we performed tests on randomly populated tables that mimicked the production state as closely as possible. With the higher volume of dummy records, we were also able to re-evaluate and simplify the sorting mechanism in use.

So, what’s the takeaway?

Don’t underestimate the quick-win stories and the initial analysis, especially when working on a customized solution or on a scoped application. The data model has to be sound, aligned with current requirements and flexible enough to serve future development as well. It’s no use if your front-end and backend developers have finished their tasks but have no clue what information they are working with or, simply, how the gears should work together. A holistic understanding of the project should not be the task of the project manager, project owner or development lead alone, but of the full team.

Check how many of these points you can cross out when developing new functionality:

  • Keep an eye on performance from day one
  • Don’t leave certain topics for later in order to generate a quick win
  • Before designing the front-end, make sure you have sufficient knowledge of your backend
  • Introduce developer sessions to ensure that all developers are synchronized on their tasks
  • Insist on the quality of supplied data
  • Insist on the quality of supplied data (this point can never be raised enough)