“You can't control what you can't measure”: slow response times, sluggish request processing and time-outs in a project workflow call for rapid optimization of system performance. Measuring the performance of an Atlassian toolchain requires individual tests, because every environment and every usage pattern is unique. To capture performance values, catworkx pairs open source software with its own internal tools, such as catworkx SPIN (stress app). This makes it possible to map the behavior of a specific system accurately over a defined time frame while pushing the Atlassian instance to its stress limit.

Why performance matters:
“Performance engineering” is the collective term in IT for developing solutions to non-functional requirements such as throughput, latency or memory consumption. The solutions developed must withstand growing user demand while meeting users' speed expectations. Why is this important? Because users have no patience: there are three seconds or less to hold a user's attention. If this hurdle is not cleared, there is a risk that the user will no longer be “there”. In other words, the solution is rejected, the process is avoided and the customer ends up dissatisfied.
Business processes today must run reliably, quickly and with a minimum of interruptions; only then can business expectations be met and the ability to act maintained. Measuring performance is the key to identifying potential for improvement, and that in turn is what enables business growth. This connection underpins the sentence: “You can't control what you can't measure” (Tom DeMarco).
From idea to realization:
This realization was the catalyst for catworkx to invest in measuring the performance of the Atlassian toolchain. As centralized processes and business functions in medium and large organizations increase the need for workflow management, documentation, collaboration and compliance policy implementation, all parts of the toolchain need to be scrutinized.
In the example below, the catworkx team took on a Jira system and assembled a tool palette for it that lets business customers visualize the essential information clearly. Business managers and IT staff can thus identify and understand the bottlenecks or stumbling blocks that hinder the service. catworkx has applied its skills in improving the performance and stability of the Atlassian toolchain many times; this knowledge, and the set-up built on it, is the foundation of how we help today's clients get rid of yesterday's problems and prepare for tomorrow's business needs.
Toolchain and usage:
Since every environment and every usage pattern is unique and requires individual investigation, the starting point of each system must be examined first. After evaluating several tools for our needs, catworkx settled on a tool set consisting of Gatling (open source load-testing framework), InfluxDB (open source time-series database for storing large volumes of measurement data) and Grafana (open source metrics dashboard), which meets our requirements for scalability and practicability.
These external tools are combined with our own internal tools, such as catworkx SPIN (stress app), to collect behavioral information about a specific system over a defined period while pushing the Atlassian instance to its stress limit.
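With Gatling, the load itself is described as code. The following sketch shows what a minimal simulation against a Jira instance might look like; the base URL, credentials, JQL query and load figures are illustrative assumptions, not values from a customer project.

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class JiraBrowseSimulation extends Simulation {

  // Hypothetical test instance and credentials, not real customer values.
  val httpProtocol = http
    .baseUrl("https://jira.example.com")
    .acceptHeader("application/json")
    .basicAuth("loadtest-user", "changeit")

  // One representative user journey: query recently updated issues.
  val browseIssues = scenario("Browse issues")
    .exec(
      http("Search issues")
        .get("/rest/api/2/search")
        .queryParam("jql", "order by updated desc")
        .queryParam("maxResults", "50")
        .check(status.is(200))
    )
    .pause(1.second, 3.seconds) // simulated user think time

  // Ramp up to 200 virtual users over ten minutes (illustrative figures).
  setUp(
    browseIssues.inject(rampUsers(200).during(10.minutes))
  ).protocols(httpProtocol)
}
```

In the usual setup of this stack, Gatling streams its run metrics via the Graphite protocol into InfluxDB, and Grafana charts the stored series.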
Customer example:
The following example concerns a customer system (Jira) that had attracted attention through slow response times, sluggish request processing and time-outs. After the first cycle we learned that every single change to the tool's setup or configuration needs to be cross-tested to verify its benefit. Changing more than one condition at a time proved to be the wrong approach, as overlaps and side effects can distort the measurements.
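One way to enforce this one-change-at-a-time discipline is to keep the load profile and the acceptance thresholds constant across runs, so that only the configuration under test varies. A minimal sketch along those lines (all figures are illustrative, not from the customer project):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// A constant, repeatable load plus fixed acceptance thresholds, so that
// runs before and after one single configuration change stay comparable.
class BaselineSimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://jira.example.com") // hypothetical

  val searchOnly = scenario("Baseline search")
    .exec(
      http("Search issues")
        .get("/rest/api/2/search")
        .check(status.is(200))
    )

  setUp(
    searchOnly.inject(constantUsersPerSec(10).during(10.minutes))
  ).protocols(httpProtocol)
    .assertions(
      global.successfulRequests.percent.gt(99),   // hypothetical error budget
      global.responseTime.percentile(95).lt(3000) // hypothetical 3 s target
    )
}
```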
1. Assessment:
The initial analysis revealed a system that had been in use for a long time and had never undergone performance optimization in any form: a slow system with long response times and a poor user experience.

2. Assessment:
After seeing that the system's response times had improved, we moved on to the third round of testing: database optimization.

3. Assessment:
We found that the database parameters in effect and the JDBC drivers used on the customer system left room for improvement. Next, we took the obvious step: we gave the system more memory, step by step.
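A quick way to verify which driver a system is actually running is to ask the JDBC connection itself. A minimal sketch, assuming a PostgreSQL-backed Jira instance with hypothetical connection details:

```scala
import java.sql.DriverManager

// Prints the JDBC driver and database server versions, making outdated
// drivers easy to spot. Requires the PostgreSQL JDBC driver on the
// classpath; URL and credentials are hypothetical placeholders.
object JdbcVersionCheck extends App {
  val conn = DriverManager.getConnection(
    "jdbc:postgresql://db.example.com:5432/jiradb", "jira", "changeit")
  try {
    val md = conn.getMetaData
    println(s"Driver:   ${md.getDriverName} ${md.getDriverVersion}")
    println(s"Database: ${md.getDatabaseProductName} ${md.getDatabaseProductVersion}")
  } finally conn.close()
}
```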

4. Visualizing the results with a Grafana dashboard:
The customized Grafana dashboard allowed us to break a measurement down to individual, specific entities and values, giving us a maximum of transparency and visualization. In particular, interference between the various system and software areas could easily be uncovered via this dashboard.
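Breaking measurements down by entity works because each stored data point carries tags that Grafana can group by. As an illustration of the underlying mechanism, the following sketch writes one tagged data point to InfluxDB using the 1.x line protocol over HTTP; host, database and measurement names are hypothetical.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

// Writes one data point via the InfluxDB 1.x line protocol:
//   measurement,tag=value field=value timestamp(ns)
// The "endpoint" tag is what lets a Grafana panel break results down per
// entity. Host, database and all names are hypothetical placeholders.
object InfluxWriteExample extends App {
  val timestampNs = System.currentTimeMillis() * 1000000L
  val line = s"jira_response_time,endpoint=search value=412 $timestampNs"

  val request = HttpRequest.newBuilder()
    .uri(URI.create("http://influxdb.example.com:8086/write?db=perf"))
    .POST(HttpRequest.BodyPublishers.ofString(line))
    .build()

  val response = HttpClient.newHttpClient()
    .send(request, HttpResponse.BodyHandlers.ofString())
  println(s"InfluxDB returned HTTP ${response.statusCode()}") // 204 on success
}
```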

Conclusion:
These steps raised the overall performance and responsiveness of the system to an acceptable level, allowing the customer to keep using it with the optimized parameters. The individual measures yielded improvements of 30 to 60 percent in their respective areas, and continuous monitoring ensured that overlapping side effects were ruled out. The result was a satisfied customer who did not have to buy a new (larger) system to keep pace with their business requirements.