68. Data-Driven Scheduling from Entities

Status: Accepted
Date: 2025-07-06

Context

We need a pattern for how and when to trigger our centralized scheduling logic. We could have services explicitly call the scheduler after every relevant action, but this can lead to scattered calls and makes it easy to forget a step. A more robust pattern is to tie the scheduling logic directly to the lifecycle of our core data entities. For example, the creation of a new Order in the database should automatically trigger the scheduling of its future lifecycle events.

Decision

We will implement a Data-Driven Scheduling pattern using our ORM, MikroORM. The scheduling logic will be triggered by the lifecycle events of our database entities.

Specifically, we will use MikroORM's lifecycle hooks (e.g., @AfterCreate, @AfterUpdate) or, preferably, its event subscriber system. When an entity of interest (like an Order or Position) is created or updated, the corresponding event listener will be triggered. This listener will then be responsible for calling the appropriate method on the centralized Schedulers module to create, update, or cancel scheduled jobs related to that entity.

For example:

  1. An Order entity is created and persisted.
  2. The afterCreate event listener for Order fires.
  3. The listener calls orderSchedulerService.scheduleFillCheck(newOrder).
  4. The OrderSchedulerService then enqueues the necessary delayed jobs in BullMQ.
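The flow above can be sketched as follows. This is a minimal, self-contained simulation: `Order`, `OrderSchedulerService`, `OrderSubscriber`, and `OrderRepository` are illustrative stand-ins, not our real classes. In production the subscriber would implement MikroORM's `EventSubscriber` interface and the scheduler would enqueue delayed jobs in BullMQ; here the queue is an in-memory array so the wiring is visible end to end.

```typescript
// Stand-in for the Order entity.
interface Order {
  id: string;
  fillCheckAfterMs: number;
}

// Stand-in for the BullMQ-backed scheduler service: records jobs in memory
// instead of calling queue.add(...) with a delay option.
class OrderSchedulerService {
  readonly scheduledJobs: { orderId: string; delayMs: number }[] = [];

  scheduleFillCheck(order: Order): void {
    this.scheduledJobs.push({ orderId: order.id, delayMs: order.fillCheckAfterMs });
  }
}

// Stand-in for a MikroORM event subscriber reacting to afterCreate events.
class OrderSubscriber {
  constructor(private readonly scheduler: OrderSchedulerService) {}

  afterCreate(order: Order): void {
    this.scheduler.scheduleFillCheck(order);
  }
}

// Stand-in for the ORM: persists the entity, then fires the lifecycle event.
class OrderRepository {
  constructor(private readonly subscriber: OrderSubscriber) {}

  persist(order: Order): Order {
    // ...the real ORM writes the row to the database here...
    this.subscriber.afterCreate(order); // lifecycle event fires after persistence
    return order;
  }
}

const scheduler = new OrderSchedulerService();
const repository = new OrderRepository(new OrderSubscriber(scheduler));
repository.persist({ id: "order-1", fillCheckAfterMs: 5_000 });
```

Note that the code that creates the order never mentions scheduling; the subscriber carries that concern, which is exactly the decoupling this decision is after.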

Consequences

Positive:

  • Co-location of Logic: The logic for what to schedule is located right next to the entity definition or in a dedicated subscriber, making the relationship between the data and its scheduled lifecycle events explicit and easy to find.
  • Reliability & Consistency: This pattern makes it much harder to "forget" to schedule tasks: if the entity is saved to the database, its related jobs will be scheduled, keeping the data and its scheduled jobs consistent.
  • Transactional Integrity: By using the ORM's event system (e.g., a subscriber hook that fires after the transaction commits, such as MikroORM's afterTransactionCommit), we can ensure that we only schedule jobs for data that has been successfully committed to the database, preventing a job from running before (or without) its underlying record existing.
  • Decoupling: The core business logic that creates the order doesn't need to know about scheduling. It just creates and saves the order. The scheduling is a separate, cross-cutting concern handled by the event system.
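The transactional-integrity point can be made concrete with a commit-gated buffer: jobs requested during a transaction are held back and only handed to the queue once the transaction commits, or dropped on rollback. This is a sketch of the idea under illustrative names, not MikroORM's or BullMQ's actual API.

```typescript
type PendingJob = { name: string; payload: unknown };

// Buffers job requests made inside a transaction and releases them to the
// queue only after commit; a rollback discards them.
class CommitGatedScheduler {
  private pending: PendingJob[] = [];
  readonly enqueued: PendingJob[] = []; // stand-in for the real job queue

  // Called from afterCreate/afterUpdate listeners while the transaction is open.
  request(job: PendingJob): void {
    this.pending.push(job);
  }

  // Called from an after-commit hook: flush the buffer to the queue.
  onCommit(): void {
    this.enqueued.push(...this.pending);
    this.pending = [];
  }

  // Called on rollback: drop jobs for data that never reached the database.
  onRollback(): void {
    this.pending = [];
  }
}

const gate = new CommitGatedScheduler();
gate.request({ name: "fill-check", payload: { orderId: "order-1" } });
// Nothing reaches the queue until the transaction commits:
gate.onCommit();
```

The payoff is that a worker can never pick up a fill-check job for an order that was rolled back and never existed.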

Negative:

  • "Magic" Behavior: The use of lifecycle hooks or event subscribers can sometimes feel like "magic" to developers who are not aware of them. It's not immediately obvious from the service code that created the order that a scheduling event was also triggered.
  • Testing Complexity: Unit testing services can become more complex, as you might need to mock or trigger the ORM's event system to test the full behavior.
  • Performance: Firing events on every entity save adds a small amount of overhead, though this is usually negligible.

Mitigation:

  • Clear Documentation: This architectural pattern will be clearly documented, and developers will be trained to look for entity subscribers as the source of scheduling logic.
  • Favor Subscribers over Hooks: We will favor using MikroORM's global EventSubscriber system over decorators (@AfterCreate) directly on entities. Subscribers are more explicit, easier to test in isolation, and keep the entity files themselves cleaner.
  • Integration Testing: We will rely on integration tests (which spin up a real database connection) to validate the end-to-end flow, from entity creation to job scheduling.
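The "easier to test in isolation" claim for subscribers looks like this in practice: because the subscriber depends only on a narrow scheduler interface, a unit test can pass in a recording fake and never boot the ORM or a queue. The interface and class names below are illustrative.

```typescript
// Narrow interface the subscriber depends on (illustrative).
interface FillCheckScheduler {
  scheduleFillCheck(order: { id: string }): void;
}

// The subscriber under test: reacts to an entity-created event by scheduling.
class OrderCreatedSubscriber {
  constructor(private readonly scheduler: FillCheckScheduler) {}

  afterCreate(order: { id: string }): void {
    this.scheduler.scheduleFillCheck(order);
  }
}

// A recording fake standing in for the real BullMQ-backed scheduler.
const calls: string[] = [];
const fakeScheduler: FillCheckScheduler = {
  scheduleFillCheck: (order) => calls.push(order.id),
};

new OrderCreatedSubscriber(fakeScheduler).afterCreate({ id: "order-42" });
```

Integration tests with a real database connection still cover the path from `persist()` through the ORM's event dispatch, but this level of test keeps the scheduling rules themselves fast to verify.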