Event Sourcing with Apache Kafka

Let us talk about what it takes to build a system with Event Sourcing on top of Apache Kafka.

Spoiler: it is possible for event sourcing without aggregates, when using Kafka's Transactions API for optimistic concurrency control (OCC).

For a long time, there was a consensus that Kafka and Event Sourcing are not compatible with each other. This came from a few limitations:

  • Apache Kafka is a highly-available message bus that favours availability over consistency

  • Kafka is not designed to work with many topics. Thousands are ok, but millions would not work. So it is not possible to address millions of individual aggregate streams for load and store operations.

  • Kafka is an eventually-consistent system; you cannot implement strong consistency guarantees on top of it.

  • Default schema management in the Kafka ecosystem was designed to have one message type per topic. Workarounds with unions and custom envelopes were always possible, but they go against the grain of Kafka Streams.

So it might look like there is no way of working with Event Sourcing.

Before we dive into why this is not the case, we need to take a closer look at the history: There are actually two flavours of Event Sourcing: "Plain" and "with Aggregates". They have different implementation constraints.
 

History of Event Sourcing

Plain Event Sourcing was described by Martin Fowler in 2005, along with Domain Events.

It describes systems which capture all changes to their application state as a sequence of events. Such systems have many benefits:

  • replicate databases between different locations

  • replay events to reconstruct database state at any moment in time

  • change database schema at will

  • introduce reactive integration patterns, where different components react to specific events of interest.

In essence, one could implement this flavour of event sourcing by:

  • maintaining a database (SQL or NoSQL);

  • for each change in the database - generating an event that could be used to reproduce that change (this is easier to achieve when changes are driven by events from the start);

  • storing all events by appending them to an event log;

  • optionally using that log to rebuild (source) database state from scratch.
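The steps above can be sketched with a minimal in-memory event log. This is an illustrative sketch only; the `EventLog` class, `apply_event` function and event shapes are invented for this example:

```python
# Minimal sketch of plain event sourcing: the database state is always
# derivable from an append-only log of events (names are invented).

def apply_event(state: dict, event: dict) -> dict:
    """Project a single event onto the current database state."""
    kind = event["type"]
    if kind == "user_created":
        state[event["user_id"]] = {"email": event["email"]}
    elif kind == "email_changed":
        state[event["user_id"]]["email"] = event["email"]
    return state

class EventLog:
    """Append-only log of all changes."""
    def __init__(self):
        self.events = []

    def append(self, event: dict) -> None:
        self.events.append(event)

    def rebuild(self) -> dict:
        """Source the database state from scratch by replaying every event."""
        state = {}
        for event in self.events:
            state = apply_event(state, event)
        return state

log = EventLog()
log.append({"type": "user_created", "user_id": "u1", "email": "a@x.io"})
log.append({"type": "email_changed", "user_id": "u1", "email": "b@x.io"})

assert log.rebuild() == {"u1": {"email": "b@x.io"}}
```

The same `rebuild` call is what lets you reconstruct state at any moment in time: replay only the events up to that point.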

However, plain Event Sourcing did not provide a mental model for structuring applications in a complex domain. That came from a different area - Domain-Driven Design (DDD).

In August 2003, Eric Evans published his fantastic blue book about tackling complexity in the heart of software. It was called "Domain-Driven Design". The book explained how to make the design of an application match the mental model of the domain you are dealing with.

Domain-Driven Design by Eric Evans

Among its patterns, DDD included "Aggregates" - "a collection of related objects that we want to treat as a single unit". An aggregate was pictured as a tree with a root and a boundary. The root is used to find and address the aggregate in a database, while the boundary prescribes how far a database transaction spans.

At this point, "Domain-Driven Design" did not yet cover domain events or event sourcing.

Later, Eric Evans interviewed Greg Young about Event Sourcing at QCon SF 2007. This was one of Greg's earliest public explanations of Event Sourcing; a lot of materials and a great course followed later.

Having two visionaries discuss these topics cemented the relation between Domain-Driven Design and Event Sourcing through the Aggregate pattern. This approach, frequently served together with CQRS, explained how to design, implement and evolve a system with event sourcing.

The solution was a very logical one: we take a classical aggregate that was originally stored in the database. However, instead of storing it as a tree of linked database records, we store it as a sequence of domain events that capture all past changes. On loading, we replay these events to project them into the state needed for making new decisions. Changes are appended in the form of new events.
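A minimal sketch of such an event-sourced aggregate, using an invented bank-account domain (load by replaying past events, decide against the projected state, record new events):

```python
# Sketch of an "Aggregate with Event Sourcing" (the Account class and
# event shapes are invented for illustration).

class Account:
    def __init__(self, past_events):
        # Loading = replaying past events to project current state.
        self.balance = 0
        self.new_events = []
        for event in past_events:
            self._apply(event)

    def _apply(self, event):
        if event["type"] == "deposited":
            self.balance += event["amount"]
        elif event["type"] == "withdrawn":
            self.balance -= event["amount"]

    def withdraw(self, amount):
        # Decide based on the projected state, then record a new event.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        event = {"type": "withdrawn", "amount": amount}
        self._apply(event)
        self.new_events.append(event)

history = [{"type": "deposited", "amount": 100}]
account = Account(history)
account.withdraw(30)
assert account.balance == 70
assert account.new_events == [{"type": "withdrawn", "amount": 30}]
```

On save, only `new_events` would be appended to the aggregate's stream; the next load replays the full sequence again.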

This approach:

  • preserved all the benefits of plain event sourcing;

  • provided practical guidance on implementing event sourcing: start with the DDD book, but instead of doing plain Aggregates, do "Aggregates with Event Sourcing".

Later on, Greg delivered numerous courses on event sourcing and also started Event Store - a company and a dedicated database perfectly suited for storing aggregates as sequences of past events. It could handle millions of individual aggregate streams without problems.
 

Event Sourcing with Aggregates

Let us get back to the original question - implementing Event Sourcing with Kafka.

If we are talking about Event Sourcing with Aggregates, Apache Kafka alone (!) is not a very good fit.

The number of distinct aggregates in a system can easily reach millions. If we were to store them as event streams, that would mean millions of individually addressable event streams.

However, the number of topics in Kafka is somewhat limited, so we cannot use it as a naive event store. If we really want to use ES with Kafka, we'll need to somehow mix (multiplex) aggregate event streams. Things can get messy from here on.

Default schema management in the Kafka ecosystem favours a "one message schema per topic" approach. Chances are that different message types will go to different topics. This makes it easier to build streaming solutions on top, but causes:

  • message bloat - since events are almost always processed out of their context, they need to carry a lot of context with them

  • message fragility - the larger the event schema, the higher the chance that it will have to evolve to accommodate even more changes

  • overall system complexity - whenever one needs to make a decision based on multiple different events, different Kafka topics need to be joined together.

  • race conditions - events on different topics will be delivered and processed at different speeds, smearing the initial consistency.

In essence, we end up with "Kafka as a document database with strongly-typed streams", instead of the event log.

Long story short: Event Sourcing is not a good fit for Apache Kafka, as long as we are talking about "Event Sourcing with Aggregates".

Let us take a look at "Event Sourcing without Aggregates" or just plain "Event Sourcing".
 

Event Sourcing

Plain event sourcing has a long history in databases. Most databases maintain a transaction log (or write-ahead log). This log is used to replicate changes from the primary database to replicas, where it is materialized to recreate exactly the same state.

Kafka is a good fit for log shipping and replication. We can still use domain events as a foundational element in our design, as long as we make sure that all events (for the same app/shard/tenant) go to the same topic and partition.

Whenever there is a change:

  • generate domain event;

  • apply event to the DB (project/materialize);

  • write to the event log.

The DB is driven only by events and can always be replayed from them. This is also how new versions are deployed: launch a new replica, let it replay events, then switch traffic at the load balancer.
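Keeping all of a tenant's events on one partition preserves their relative order. Kafka's default partitioner achieves this by hashing the message key (with murmur2); below is an illustrative stand-in that uses SHA-256 and an invented partition count, just to show the property that matters:

```python
# Sketch: a stable hash of the tenant id sends every event of that tenant
# to the same partition, preserving per-tenant ordering. (Kafka's real
# default partitioner hashes the record key with murmur2; the hash choice
# here is illustrative.)

import hashlib

NUM_PARTITIONS = 12  # invented for the example

def partition_for(tenant_id: str, num_partitions: int = NUM_PARTITIONS) -> int:
    digest = hashlib.sha256(tenant_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

p = partition_for("tenant-42")
assert p == partition_for("tenant-42")   # deterministic: same tenant, same partition
assert 0 <= p < NUM_PARTITIONS
```

In practice you get the same effect by simply using the tenant id as the Kafka message key.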

There is no need for individually addressable aggregates in this approach; our database is the unit of work. This is similar to how database engines themselves keep a single transaction log per database.

Using Kafka for event sourcing also requires the capability to "enforce invariants". In other words, this is the ability to provide certain guarantees about the state that goes into the event stream. For example:

  • banking: a person can make many withdrawals, but cannot withdraw more than 1500 EUR per day;

  • warehouse: items can be taken from the inventory, but inventory can never go negative;

  • reservations: seats can be reserved, but a single seat will never be reserved by more than one person.

Our application needs to enforce these invariants even when multiple application instances try to write events at the same time - concurrently (this happens frequently enough in deployments with high availability).
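For illustration, the banking invariant above can be checked by folding over the already-recorded events before admitting a new one. The event shape and the `can_withdraw` helper are invented for this sketch:

```python
# Sketch: enforce "no more than 1500 EUR withdrawn per day" by summing
# today's withdrawal events before accepting a new one.

from datetime import date

DAILY_LIMIT = 1500  # EUR

def can_withdraw(events, amount, on_day):
    withdrawn_today = sum(
        e["amount"] for e in events
        if e["type"] == "withdrawn" and e["day"] == on_day
    )
    return withdrawn_today + amount <= DAILY_LIMIT

today = date(2024, 1, 5)
events = [{"type": "withdrawn", "amount": 1000, "day": today}]

assert can_withdraw(events, 500, today)       # 1000 + 500 is still within limit
assert not can_withdraw(events, 600, today)   # 1000 + 600 would break the invariant
```

The check itself is easy; the hard part, discussed next, is making it hold when two instances run it concurrently against the same stream.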

We can do that either by preventing conflicts from being recorded in the event log or by resolving them afterwards.

Here are some ways:

  1. Optimistic Concurrency Control, enforced by the event store. Conflicting writes will not happen, because only one writer will be able to append an event at a time. The other will get a concurrency error and will have to read the latest events and retry. Kafka doesn't support this, but many event store implementations provide it out of the box.

  2. Using the Kafka Transactions API to turn weak leadership into strong leadership. This ensures that only one application instance (the leader) can write to a specific partition at any given moment. Invariants are preserved when publishing. Conflicts don't happen, because there is only one writer publishing at any point. Check out the Kafka Transactions API or this insightful blog post for more details.

  3. The way of CRDTs. Let's dive into more detail here.
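The check in option 1 can be sketched as a version-guarded append. This is an in-memory illustration of what an OCC-capable event store does, not Kafka's API; the `Stream` class and event shapes are invented:

```python
# Sketch of optimistic concurrency control: append succeeds only if the
# caller has seen the latest stream version; a stale writer gets a
# ConcurrencyError and must re-read and retry.

class ConcurrencyError(Exception):
    pass

class Stream:
    def __init__(self):
        self.events = []

    @property
    def version(self):
        return len(self.events)

    def append(self, event, expected_version):
        if expected_version != self.version:
            raise ConcurrencyError(
                f"expected v{expected_version}, stream is at v{self.version}"
            )
        self.events.append(event)

s = Stream()
s.append({"type": "seat_reserved", "seat": "12A"}, expected_version=0)

# A second writer that also read version 0 now loses the race:
try:
    s.append({"type": "seat_reserved", "seat": "12A"}, expected_version=0)
    raised = False
except ConcurrencyError:
    raised = True

assert raised and s.version == 1
```

The losing writer re-reads the stream, discovers the seat is taken, and makes a different decision instead of blindly retrying.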

It is also possible to use the CRDT approach (Conflict-free Replicated Data Types) to resolve conflicts after they were recorded in the event log. It relies on the fact that Kafka will order messages anyway, even if both writes arrive almost at the same time.

We could pick a simple resolution strategy: the first event to advance the event log to the next version wins; the other one is rejected. All writers read their topic anyway (because they do not know whether they are a replica), so they can wait a little to see if their write made it through. Conceptually this is the same as waiting for an ACK about a write from the cluster, just over a different channel.

More involved conflict resolution could extend this approach, increasing system throughput at the cost of complexity: for example, "accept both writes if the events change different entities and have no chance of breaking invariants".
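Under the stated assumptions, the simple "first event to advance the log wins" resolution can be sketched as a fold over the broker-ordered log (the event shape and `resolve` helper are invented):

```python
# Sketch of after-the-fact conflict resolution: both writers publish an
# event claiming the same next version; the broker's total order decides,
# and each writer learns its fate by reading the log back.

def resolve(log_in_arrival_order):
    """Keep only events that actually advance the stream version."""
    accepted, version = [], 0
    for event in log_in_arrival_order:
        if event["expected_version"] == version:
            accepted.append(event)
            version += 1
        # else: a conflicting write; its writer observes the loss and retries.
    return accepted

arrivals = [
    {"writer": "A", "expected_version": 0, "type": "seat_reserved"},
    {"writer": "B", "expected_version": 0, "type": "seat_reserved"},  # loses the race
]

accepted = resolve(arrivals)
assert [e["writer"] for e in accepted] == ["A"]
```

Because every consumer applies the same deterministic rule to the same ordered log, all replicas reject the same events and converge on the same state.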
 

Summary

To summarise: it is possible to implement event sourcing using Apache Kafka as a transport if the following requirements are met:

  • All events for a single aggregate are published to a single topic

  • We use Kafka Transactions API or CRDT approach to enforce aggregate invariants.

  • We steer towards large aggregate streams, partitioned by tenant.

 

While this approach is feasible in theory (and was implemented in practice), there are inherent roadblocks in reality, especially in larger organisations.

The Kafka software ecosystem favours having one message schema per topic. While it is possible to use unions or schema references in Kafka topics, this is a recent development with limited support.

Historically, Kafka is favoured by large enterprises. Such companies tend to have strong departmental divisions, with different teams being responsible for different modules, micro-services and corresponding message types. It would be unusual to have multiple teams develop a product that puts different message schemas in a single aggregate and topic, while treating it as the source of truth.

The last point is the important one: while it is technically trivial to put multiple message types in a single topic, the organisational structure works against it.


Compliance with ESG and sustainability standards is mandatory for companies in order to meet the requirements of the EU's Corporate Sustainability Reporting Directive (CSRD).

Kompetenz
Kompetenz

Software Engineering

Software engineering is the art of creating good software. Software engineering is the discipline of creating your digital products using methods proven by science and practice.

TIMETOACT
Referenz
Referenz

Interactive online portal identifies suitable employees

TIMETOACT digitizes several test procedures for KI.TEST to determine professional intelligence and personality.

TIMETOACT GROUP
Branche
Branche

Digital transformation in public administration

The digital transformation will massively change the world of work, especially in public administration. We support federal, state and local authorities in the strategic and technical implementation of their administrative modernisation projects.

TIMETOACT
News
News

TIMETOACT is Mendix training partner

We are convinced of Mendix's no-code/low-code platforms and are therefore not only a Mendix partner, but also a Mendix training partner.

TIMETOACT
Technologie
Logo Atlassian Confluence
Technologie

Confluence from Atlassian

Create, organize, and collaborate on tasks - all in a single place. Confluence is a workspace for teams and organizations where you can store your documentation and collaboratively develop and share knowledge. Dynamic pages give your team a place to create, capture, and collaborate around projects or idea development.

TIMETOACT
Technologie
Logo Jira Service Management
Technologie

Jira Service Management from Atlassian

Enable developers, operators, and other teams from different departments to collaborate and improve their service management responsiveness. Respond quickly to incidents, requests, and changes, and provide your customers with an optimized service experience.

TIMETOACT
Service
Header zu Fullstack Development
Service

Fullstack Development

The trend in Software Development is towards Full-Stack Development. Full-stack developers are programmers who work in both frontend and backend development and thus have competencies in the areas of databases, servers, systems and clients.

TIMETOACT
Service
Headerbild zu Application Modernization
Service

Application Modernization

Application Modernization focuses on modernizing existing applications. The key to success in Application Modernization is the strategy and selection of projects.

TIMETOACT GROUP
Branche
Schild als Symbol für innere und äußere Sicherheit
Branche

Internal and external security

Defense forces and police must protect citizens and the state from ever new threats. Modern IT & software solutions support them in this task.

TIMETOACT
Blog
Blog

TIMETOACT starts with the implementation of ESG-Suit

Compliance with ESG and sustainability standards is mandatory for companies in order to meet the requirements of the EU's Corporate Sustainability Reporting Directive (CSRD).

Kompetenz
Kompetenz

AI - A technology is revolutionizing our everyday lives

For ARS, AI is an increasingly natural and organic part of software engineering. This is particularly true in cases where it is an integral part of applications and functions.

TIMETOACT
News
News

HCL license and price adjustments as of 8.8.2024

HCL Software has informed us that the license and maintenance prices for the various product categories will be increased as of 8.8.2024.

Service
Service

Application Integration & Process Automation

Digitizing and improving business processes and responding agilely to change – more and more companies are facing these kind of challenges. This makes it all the more important to take new business opportunities through integrated and optimized processes based on intelligent, digitally networked systems.

TIMETOACT
Technologie
Technologie

Pimcore als Open Source Software Platform

Pimcore is a free-to-use open source software platform and is already used effectively in many corporate fields. With Pimcore, your classic day-to-day business problems with portals and websites are easily solved as we tailor the platform to your specific business requirements.

Service
Service

Cloud Transformation & Container Technologies

Public, private or hybrid? We can help you develop your cloud strategy so you can take full advantage of the technology.

News
News

Atlassian publishes largest release for Confluence

Atlassian announces a series of new functions for Confluence. There is also a Confluence Mobile App and - last but not least - Confluence is now also on YouTube.

Referenz
Referenz

Interdisciplinary collaboration at C&A with Atlassian

To homogenize the toolchain TIMETOACT replaced two independent ticketing systems for C&A Services GmbH &Co. with the Atlassian product range. With this step into the enterprise cloud, the fashion retailer is putting is putting an exclamation mark on cross-departmental and cross-location digital collaboration.

TIMETOACT GROUP
Service
Headerbild zur offenen und sicheren IT bei Versicherungen
Service

Open and secure IT

Just a few years ago, insurers were reluctant to move into the cloud or platform world. Concerns about security and governance often prevailed. The paradigm has changed. Insurers are looking at cloud solutions and are increasingly willing to rely on standard solutions for core applications as well.

TIMETOACT
Technologie
Headerbild zu Webserver mit Open Source
Technologie

Web server with Open Source

Web servers provide their application with the gateway to the world: this is where requests for data for a complex web app and resources for a website go in and out.

Referenz
Referenz

Inventory management with Jira and Confluence from Atlassian

The catworkx approach for lifecycle management of IT inventory: The lifecycle of the inventory is modeled as a specific Jira workflow and various inventory categories are mapped and managed as task types. Confluence is perfectly suited for the documentation.