Nucleus Framework

Production-ready microservices foundation for Java / Spring Boot

Nucleus is a modular platform layer that provides the infrastructure every enterprise microservices application needs -- authentication, audit trails, messaging, monitoring, email, storage, and more. Add it as a Maven dependency and start building business logic on day one.

Quick Start

1. Add the parent POM

<parent>
    <groupId>com.nucleus</groupId>
    <artifactId>nucleus-parent</artifactId>
    <version>0.0.1</version>
</parent>

2. Pick your modules

<dependencies>
    <dependency>
        <groupId>com.nucleus</groupId>
        <artifactId>nucleus-core</artifactId>
        <version>0.0.1</version>
    </dependency>
    <dependency>
        <groupId>com.nucleus</groupId>
        <artifactId>nucleus-audit-core</artifactId>
        <version>0.0.1</version>
    </dependency>
</dependencies>

3. Configure

spring:
  config:
    import: configserver:https://your-config-server
  application:
    name: my-service
  profiles:
    active: monitoring

Architecture

Nucleus platform — libraries, runtime services, and infrastructure
flowchart TB
  subgraph NucleusApps["Nucleus runtime services"]
    direction LR
    NM["nucleus-monitoring<br/>(deployed as a monitoring service)"]
    NSP["nucleus-system-property<br/>(deployed as a config service)"]
    NAuth["nucleus-authentication<br/>(deployed as an OAuth2 server)"]
    NMail["nucleus-mail<br/>(deployed as a mail service)"]
  end
  subgraph Libs["Nucleus libraries"]
    direction LR
    NC["nucleus-core<br/>JWT · AI · workflow"]
    NMC["nucleus-monitoring-client<br/>heartbeat · CPU · GC · threads"]
    NCon["nucleus-connectors<br/>@NucleusListener / @NucleusPublish"]
    NAC["nucleus-audit-core<br/>@AuditAction"]
    NSt["nucleus-storage<br/>MinIO wrapper"]
  end
  subgraph Infra["Infrastructure"]
    direction LR
    K[("Kafka")]
    M[("MinIO")]
    DB[("MySQL / Postgres")]
    CS["config-server"]
  end
  Libs --> NucleusApps
  NucleusApps --> Infra
  CS -.->|serves config| NucleusApps
nucleus-parent (Maven parent POM)
  nucleus-bom (Bill of Materials)

nucleus-core (foundation: security, JWT, pagination, AI, workflow)
  nucleus-connectors (@NucleusListener — 13 middleware backends)
    nucleus-connectors-admin (runtime provisioning, REST + UI)
    nucleus-monitoring-client (self-registration, remote logging)
  nucleus-audit-core (@AuditAction declarative audit)
  nucleus-storage (MinIO/S3 file storage)
  nucleus-execution-events (async job framework)
  nucleus-authentication (OAuth2/OIDC server)

nucleus-mail-common (shared DTOs)
  nucleus-mail (rendering + sending + step tracking)
  nucleus-mail-renderer (Thymeleaf template engine)
  nucleus-mail-sender (SMTP delivery)

nucleus-ui-message-broker (SSE with role-based delivery)
nucleus-config (Spring Cloud Config Server)
nucleus-user (user management + roles)
nucleus-address (Google Maps integration)

nucleus-core

Foundation

The base layer every Nucleus service extends. Provides security filters, JWT validation, pagination, error handling, and report generation.

ControllerErrorHandler

A single @RestControllerAdvice in nucleus-core maps every domain exception to the right HTTP status + body, so services never have to write their own try/catch wrappers at the controller layer. Throw the right exception, get the right response.

| Exception | HTTP | Body | Typical use |
|---|---|---|---|
| ResourceNotFoundException | 404 | Message surfaced | Entity lookup miss |
| DuplicateResourceException | 409 | Message surfaced | Unique-constraint violations |
| InvalidResourceStateException | 409 | Message surfaced | Illegal state transitions (e.g. restore of an already-restored recording) |
| IllegalStateException | 409 | Message surfaced | Business-rule conflicts (e.g. overlapping schedule windows) |
| IllegalArgumentException | 400 | Message surfaced | Caller-side invalid input |
| HttpMessageNotReadableException | 400 | Terse | Malformed request JSON |
| AccessDeniedException | 403 | Terse | Spring Security denials |
| Exception (fallback) | 500 | Terse | Unhandled failures — logged in full |

Guideline: throw IllegalStateException for business-rule conflicts that aren't duplicates, IllegalArgumentException for bad input the controller didn't catch, and the Nucleus-specific exceptions when you want a domain-level error surfaced verbatim to the client.
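A minimal sketch of the pattern: service code just throws, and the shared advice layer does the mapping. The ExpenseService and its in-memory lookup are hypothetical illustrations; only the exception name comes from the table above (a local stand-in class is declared here so the sketch is self-contained).

```java
import java.util.Map;
import java.util.Optional;

// Stand-in for the nucleus-core exception (assumption: an unchecked exception
// whose message is surfaced verbatim in the 404 body).
class ResourceNotFoundException extends RuntimeException {
    ResourceNotFoundException(String msg) { super(msg); }
}

// Hypothetical service illustrating the guideline: throw the domain exception,
// let ControllerErrorHandler translate it to the right HTTP status.
class ExpenseService {
    private final Map<Long, String> expenses = Map.of(1L, "Office chairs");

    String getExpense(long id) {
        return Optional.ofNullable(expenses.get(id))
                // entity lookup miss → 404 with the message surfaced to the client
                .orElseThrow(() -> new ResourceNotFoundException("Expense " + id + " not found"));
    }
}
```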

nucleus-core · AI Framework

AI

Provider-agnostic LLM client. Callers build an AiRequest describing the call and hand it to AiClient; the client routes to the configured provider and returns a strongly typed AiResult<R>. No exceptions escape execute().

AiRequest request = AiRequest.builder()
    .provider(AiProvider.ANTHROPIC)
    .model("claude-3-5-sonnet")
    .systemPrompt("You are a transaction categorizer. Respond with JSON only.")
    .userPrompt(description)
    .temperature(0.2)
    .build();

AiResult<CategoryResponse> result = aiClient.execute(request, CategoryResponse.class);

if (result.isSuccess()) {
    return result.getData();
}
// result.getErrorCode() is one of: RATE_LIMIT, HTTP_ERROR, TIMEOUT,
//   PARSING_ERROR, INVALID_RESPONSE, UNKNOWN

Providers

Features

nucleus-core · Workflow + Step Executors

Processing

Nucleus-standard way to model multi-step async business flows. Each step runs on its own Kafka topic (horizontal scaling per stage), has three outcomes — success, not-handled (delegate to next step), technical failure (abort) — and the orchestrator stays declarative.

Workflow step chain — three outcomes per step, Kafka-dispatched
flowchart LR
  Start(["@WorkflowStart<br/>startWorkflow(req)"]) --> S1["@WorkflowStep<br/>onRuleEngine"]
  S1 -->|success| OK1(["publishSuccess<br/>workflow.succeed()"])
  S1 -->|notHandled / delegate| S2["@WorkflowStep<br/>onClientLevel"]
  S2 -->|success| OK2(["publishSuccess"])
  S2 -->|notHandled / delegate| S3["@WorkflowStep<br/>onBankLevel"]
  S3 -->|success| OK3(["publishSuccess"])
  S3 -->|notHandled / delegate| S4["@WorkflowStep<br/>onAIClassification"]
  S4 -->|success| OK4(["publishSuccess"])
  S1 -.->|technicalFailed| Abort(["workflow.failed<br/>ErrorCode + reason"])
  S2 -.->|technicalFailed| Abort
  S3 -.->|technicalFailed| Abort
  S4 -.->|technicalFailed| Abort
  S4 -->|notHandled| Manual(["manual review queue"])
@WorkflowStart
public void startWorkflow(Request req) {
    workflow.start();
    workflow.dispatchRuleEngineClassification(req);
}

@WorkflowStep(mode = WorkflowStepMode.QUEUE)
public void onRuleEngine(RuleEngineRequest req) {
    var result = ruleStep.handle(req);
    if (result.isSuccess()) {
        resultService.publishSuccess(req, result.response);
        workflow.succeed();
    } else if (result.isNotHandled()) {
        workflow.dispatchAIClassification(req.asAI());
    } else {
        workflow.failed(ErrorCodes.TECHNICAL_FAILURE, result.message);
    }
}

nucleus-authentication

Security

Full OAuth2/OIDC authorization server built on Spring Authorization Server.

nucleus-audit-core

Compliance

Add audit trails to any service method with a single annotation.

@AuditAction(
    action = "Update Expense",
    entityName = "Expense",
    description = "Updated expense #{#result.id} for #{#result.vendor}"
)
public Expense updateExpense(ExpenseDTO dto) { ... }

nucleus-connectors

Messaging

Middleware-agnostic messaging framework. One @NucleusListener annotation works against 13 different brokers. Every vendor adapter conforms to the same SPI — a developer writing a listener doesn't know or care what's on the other end of the wire.

One annotation, 13 backends — @NucleusListener / @NucleusPublish abstraction
flowchart LR
  subgraph AppCode["Application code"]
    L["@NucleusListener<br/>onOrder(Order o)"]
    P["@NucleusPublish<br/>TopicPublisher"]
  end
  subgraph SPI["Nucleus connector SPI"]
    R["MessageRouter<br/>format + serialization"]
  end
  subgraph Backends["13 backend adapters"]
    direction TB
    K["Apache Kafka<br/>(+ MSK, Event Hubs)"]
    RMQ["RabbitMQ<br/>(+ Amazon MQ)"]
    AWS["SQS · SNS · Kinesis"]
    GCP["Google Pub/Sub"]
    AZ["Azure Service Bus<br/>Queue Storage"]
    JMS["JMS: ActiveMQ<br/>Artemis · Solace · IBM MQ"]
    SO["Solace (native JCSMP)"]
  end
  L --> R
  P --> R
  R --> K
  R --> RMQ
  R --> AWS
  R --> GCP
  R --> AZ
  R --> JMS
  R --> SO
Multi-ecosystem Kafka namespacing — nucleus.* defaults, per-deployment overrides
flowchart TB
  subgraph Broker["Shared Kafka broker"]
    direction TB
    ACME_H["acme.service.health"]
    ACME_L["acme.remote.logs"]
    WIDGETS_H["widgets.service.health"]
    WIDGETS_L["widgets.remote.logs"]
    N_H["nucleus.service.health<br/>(default, no override)"]
  end
  subgraph Deploy1["Product deployment A<br/>nucleus.application=ACME"]
    D1["services"]
    D1Props["PROPERTIES override:<br/>kafka.service.health.topic.name=<br/>acme.service.health"]
  end
  subgraph Deploy2["Product deployment B<br/>nucleus.application=WIDGETS"]
    D2["services"]
    D2Props["PROPERTIES override:<br/>kafka.service.health.topic.name=<br/>widgets.service.health"]
  end
  subgraph Nuc["Standalone Nucleus<br/>nucleus.application=NUCLEUS (default)"]
    N1["services (no override)"]
  end
  Deploy1 --> ACME_H
  Deploy1 --> ACME_L
  Deploy2 --> WIDGETS_H
  Deploy2 --> WIDGETS_L
  Nuc --> N_H
@NucleusListener(
    connector = "orders",
    topic     = "${orders.topic}",
    format    = MessageFormat.JSON,
    mode      = ConsumeMode.QUEUE
)
public void onOrder(Order o) {
    // first parameter type drives deserialization — no manual JSON handling
}

Supported backends

| Category | Backends | Module |
|---|---|---|
| Apache Kafka | Kafka, Amazon MSK, Azure Event Hubs (Kafka-compatible) | nucleus-connectors-kafka |
| AMQP | RabbitMQ, Amazon MQ (RabbitMQ engine) | nucleus-connectors-rabbitmq |
| AWS | SQS, SNS, Kinesis Data Streams | nucleus-connectors-sqs / -sns / -kinesis |
| Google Cloud | Pub/Sub | nucleus-connectors-pubsub |
| Azure | Service Bus, Queue Storage | nucleus-connectors-servicebus / -azurequeue |
| JMS family | ActiveMQ, Artemis, Solace (JMS), IBM MQ | nucleus-connectors-jms-* |
| Solace native | PubSub+ via JCSMP — topic wildcards, Direct messaging, replay | nucleus-connectors-solace |

Runtime provisioning

Connectors are addressable by logical name. Instances can be created at runtime through nucleus-connectors-admin — REST + UI flow to add a new broker, validate it, and start consuming without restarting the service. Ideal for multi-tenant deployments where each customer's cluster is onboarded on demand.

Vendor-specific features

Where a backend exposes something the generic SPI can't represent — Kafka partition keys, RabbitMQ exchanges, Solace topic hierarchies — opt-in meta-annotations ride alongside the core annotation:

@NucleusListener(connector = "orders", topic = "orders/>/ca/*", format = MessageFormat.JSON)
@Solace(deliveryMode = GUARANTEED, flowProfile = "high-throughput")
public void onOrder(Order o) { ... }

The core annotation stays generic; vendor extensions are additive, never parallel APIs.

Module selection

@NucleusPublish

Field-level annotation that injects a pre-configured TopicPublisher, eliminating publisher wrapper boilerplate. Mirrors @NucleusListener for the publish side.

Before (one class per topic):

@Component
@RequiredArgsConstructor
public class OrderPublisher implements DataPublisher<Order> {
    @Value("${orders.topic}") private final String topic;
    @MiddlewareQualifier(type = KAFKA)
    @MessageFormatQualifier(format = JSON)
    private final Publisher publisher;
    public void publish(Order o) { publisher.publish(topic, o, null); }
}

After (inline field):

@Component
public class OrderService {
    @NucleusPublish(topic = "${orders.topic}")
    private TopicPublisher orderPublisher;

    public void placeOrder(Order o) {
        // ... business logic ...
        orderPublisher.send(o);
    }
}
| Attribute | Default | Description |
|---|---|---|
| topic | required | Destination. Supports ${placeholders} |
| connector | "" (primary) | Logical connector name — "kafka", "rabbitmq", etc. |
| format | JSON | Serialization format |
| transactional | true | Use Kafka transactions (set false for fire-and-forget) |

TopicPublisher API:

| Method | Description |
|---|---|
| send(payload) | Send to the pre-configured topic |
| send(payload, key) | With partition key |
| send(payload, headers) | With custom headers |
| send(payload, key, headers) | Both key and headers |

Lombok compatibility: The field must be non-final — it's injected by a BeanPostProcessor after construction. @RequiredArgsConstructor works fine for other final fields; the publisher field is simply excluded from the constructor.

nucleus-monitoring-client

Observability

Lightweight service monitoring. Add the dependency, enable the monitoring profile, and your service self-registers with health snapshots, remote logging, thread dumps, and GC events — all via Kafka push. The monitoring service aggregates the streams into a fleet registry, drives a Log Explorer with infinite scroll and a timeline playback view mode, and powers the Session Recordings capture-and-replay feature.

Configuration Properties (PROPERTIES table)

Global (PROFILE=monitoring):

| Property | Default | Description |
|---|---|---|
| nucleus.monitoring.remote-logging.auto-attach | true | Auto-attach KafkaLogAppender to the root logger at boot (no logback.xml needed) |
| nucleus.monitoring.remote-logging.allowed | true | Global default — whether remote logging is permitted |
| nucleus.monitoring.publish-interval-ms | 10000 | Health snapshot publish frequency (ms) |
| nucleus.monitoring.stale-check-ms | 30000 | Time before a service is considered stale |
| nucleus.monitoring.log-retention.days | 7 | Days before logs are archived to MinIO and purged from the DB |

Per-service (PROFILE=bookwise-{service-name}):

| Property | Default | Description |
|---|---|---|
| nucleus.monitoring.remote-logging.allowed | true | Hard lock — false rejects ENABLE_LOGS; the UI toggle is greyed out |
| nucleus.monitoring.remote-logging.enabled | false | Auto-activate remote logging at boot |
| nucleus.monitoring.publish-interval-ms | 10000 | Override the health publish frequency |

Remote Logging Scenarios

| Scenario | logback.xml | enabled | allowed | Behaviour |
|---|---|---|---|---|
| A1 | REMOTE + ASYNC | true | true | Streams at boot |
| A2 | REMOTE + ASYNC | false | true | Inactive, UI-togglable |
| A4 | REMOTE + ASYNC | any | false | Hard locked — commands rejected |
| B | none | false | true | Auto-attached at runtime, UI-togglable |

Logback Setup (scenarios A1/A2)

<appender name="REMOTE" class="com.nucleus.monitoring.client.KafkaLogAppender" />
<appender name="ASYNC_REMOTE" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="REMOTE"/>
    <queueSize>4096</queueSize>
    <discardingThreshold>0</discardingThreshold>
    <neverBlock>true</neverBlock>
</appender>
<root level="INFO">
    <appender-ref ref="ASYNC_JSON"/>
    <appender-ref ref="ASYNC_REMOTE"/>
</root>

Instance-scoped "Keep across restarts"

The service-monitor UI has a Keep across restarts toggle on each control track (remote-logging, thread-stream, gc-logs). When it's on and the operator presses Enable, the state persists so the specific instance boots with the feature active.

Row shape written to PROPERTIES:

| APPLICATION | PROFILE | LABEL | PARAM_KEY | VALUE |
|---|---|---|---|---|
| BOOKWISE | bookwise-transaction-service | 1 | nucleus.monitoring.instance.<id>.remote-logging.enabled | true |

Instance scoping is encoded in PARAM_KEY (prefix nucleus.monitoring.instance.<instanceId>.) rather than the LABEL column. Spring Cloud Config's JDBC backend only queries a single label per request, so a comma-separated label list (${instance-id},1) is treated as one literal and returns nothing. Keeping every row at LABEL='1' avoids that.

Read path at boot uses a nested @Value:

@Value("${nucleus.monitoring.instance.${nucleus.monitoring.instance-id:default}.remote-logging.enabled:${nucleus.monitoring.remote-logging.enabled:false}}")
private boolean enabledAtBoot;

The outer placeholder ${nucleus.monitoring.instance-id} is populated at bootstrap by InstanceIdEnvironmentPostProcessor (a deterministic UUIDv3 of serviceName@host:port, overridable via NUCLEUS_MONITORING_INSTANCE_ID). Resolution order: the instance-specific row wins, then the service-wide row at LABEL='1', then the code default.
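The deterministic-ID scheme can be sketched in plain Java: UUID.nameUUIDFromBytes is the JDK's name-based (version 3, MD5) UUID, which is what makes the ID stable across restarts. The helper name and exact seed layout are assumptions based on the serviceName@host:port description above.

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

final class InstanceIds {
    private InstanceIds() {}

    // Stable across restarts: the same service/host/port always yields the same UUID.
    // (Sketch; the real seed format inside InstanceIdEnvironmentPostProcessor may differ.)
    static String deterministicId(String serviceName, String host, int port) {
        String seed = serviceName + "@" + host + ":" + port;
        return UUID.nameUUIDFromBytes(seed.getBytes(StandardCharsets.UTF_8)).toString();
    }
}
```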

Write path: target service's EnableLogsHandler returns __persistKey/__persistValue on the ack; ServiceControlAckListener in nucleus-monitoring upserts the row through system-property-service's /upsert endpoint (allowed for ROLE_ADMIN or SCOPE_internal.access).

Data Persistence

| DB table | Purpose | Consumer group |
|---|---|---|
| service_health_snapshot | Per-instance health metrics: CPU load, heap used/max, non-heap, thread count, status, uptime, DB pool, disk; schedule_id tags rows inside a Session Recording window and drives the CPU + heap timeline track | monitoring-health-persistence |
| remote_log_event | Persisted log events (tagged with schedule_id when captured inside a Session Recording window) | monitoring-log-persistence |
| gc_event | GC pause events (tagged with schedule_id) | monitoring-gc-persistence |
| thread_dump_snapshot | Thread-dump JSON (tagged with schedule_id) | monitoring-thread-persistence |

After log-retention.days, ambient data is purged from DB. Session Recording windows are archived to MinIO as hierarchical ZIP bundles (see Session Recordings below).

Architecture — pure microservice boundary

The monitoring service touches only its own tables via JPA. Anything owned by another service is reached over authenticated REST through a gateway. Concretely:

Gateway pattern — authenticated REST replaces cross-service DB access
flowchart LR
  subgraph Consumer["Consumer service (needs foreign data)"]
    direction TB
    Handler["Business handler"]
    Gateway["XxxGateway"]
    Token["ServiceTokenManager<br/>cached client credentials"]
    Client["XxxControllerApi<br/>generated WebClient"]
    Handler --> Gateway
    Gateway -->|getToken| Token
    Gateway --> Client
  end
  subgraph Target["Target service (owner of the data)"]
    direction TB
    API["REST endpoint"]
    Auth["JWT auth filter<br/>ROLE_INTERNAL / ROLE_ADMIN"]
    Repo["JPA repository"]
    DB[("Own DB")]
    API --> Auth --> Repo --> DB
  end
  Forbidden["Forbidden path<br/>no JdbcTemplate<br/>no cross-service SQL<br/>no foreign @Query"]
  Client -->|HTTPS + Bearer| API
  Handler -.-x Forbidden
  Forbidden -.-x DB
  style Gateway fill:#ede9fe,stroke:#7c3aed
  style Client fill:#dbeafe,stroke:#3b82f6
  style DB fill:#dcfce7,stroke:#16a34a
  style Forbidden fill:#fee2e2,stroke:#dc2626,color:#991b1b

Session Recordings

A Session Recording is a time-bounded capture of logs + GC events + thread dumps + CPU/heap samples for a specific service instance, replayable as a synchronized timeline. Recordings can be scheduled ahead of time or started on the spot from the Service Monitor (Record Now), and archived to MinIO for long-term retention. The shared timeline has five rows (logs · GC · threads · CPU · heap) driven by a single cursor, plus a dedicated CPU & Heap tab with an SVG chart. Per-instance config for each recording (remote-logging enabled, levels, gc-logs, thread-stream interval) is persisted through the SystemPropertyGateway — Nucleus never writes to another service's tables.

Session Recording data flow — capture, tag, replay, archive
flowchart LR
  subgraph Source["Monitored service (any nucleus-based service)"]
    direction TB
    App["Application code"]
    MC["nucleus-monitoring-client<br/>• CpuSampler<br/>• KafkaLogAppender<br/>• GcEventStreamer<br/>• ThreadStreamScheduler"]
    App -.->|logs / JMX / JVM events| MC
  end
  subgraph Kafka["Kafka topics<br/>nucleus.* defaults · per-deployment override"]
    direction TB
    TH["service.health"]
    TL["remote.logs"]
    TG["service.gc-events"]
    TT["service.thread-stream"]
  end
  subgraph Mon["nucleus-monitoring (runtime service)"]
    direction TB
    Lst["Persistence listeners"]
    Rsv["ActiveScheduleResolver<br/>tags schedule_id<br/>if inside a window"]
    DB[("Own DB:<br/>remote_log_event<br/>gc_event<br/>thread_dump_snapshot<br/>service_health_snapshot")]
    Lst --> Rsv
    Rsv --> DB
  end
  subgraph UI["Session Recordings UI"]
    direction TB
    TL2["Shared timeline<br/>5 density tracks"]
    Tabs["Logs · GC · Threads · CPU+Heap"]
  end
  Bundle[("MinIO<br/>recordings/host/.../file.zip<br/>+ file.meta.json")]
  MC -->|publish| Kafka
  Kafka -->|consume| Lst
  DB -->|"REST /monitor/schedules/*<br/>density + paginated"| UI
  DB -.->|archive: detach schedule_id| Bundle
  Bundle -.->|restore: reinsert rows| DB

Each recording flows through three states:

  1. Live — rows live in remote_log_event / gc_event / thread_dump_snapshot tagged with the schedule's ID; replayed from DB.
  2. Archived — moved to MinIO as a ZIP bundle + .meta.json sidecar. DB rows are detached (schedule_id set to NULL) so they remain visible to the regular Log Explorer query for normal retention; MinIO is the durable copy of the recording.
  3. Permanently deleted — bundle removed from MinIO. Cannot be undone.

Capture features

| Feature | Description |
|---|---|
| Scope | Logs, GC events, thread dumps — any combination, toggleable per schedule |
| Time windows | Start/end time with timezone-aware scheduling; overnight windows supported (e.g. 23:00–05:00) |
| Recurrence | ONCE / DAILY / WEEKDAYS / WEEKENDS |
| Level filtering | Restrict to TRACE / DEBUG / INFO / WARN / ERROR (comma-separated) |
| Thread-dump interval | Configurable 1 s / 2 s / 5 s / 10 s for streaming thread snapshots throughout the window |
| Notes / tags | Free-form text on the schedule (e.g. "investigating BOOK-289 slow login"); surfaced on the recording header, Log Explorer badge, and archive audit description |
| Overlap guard | Creating or editing a schedule returns HTTP 409 if its window overlaps an existing schedule for the same service + instance |
| Window cap | 12 h server-side safety ceiling (capScheduleWindow); the UI further caps open-ended Record Now at 4 h |
| Auto-deactivation | One-time schedules deactivate on window close; the unified activation path always writes ACTIVATED + an idempotent ENABLE_LOGS |
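The 12 h ceiling in the table can be sketched as a pure clamp. The method name capScheduleWindow comes from the table; its signature and null-means-open-ended convention here are assumptions.

```java
import java.time.Duration;
import java.time.Instant;

final class ScheduleWindows {
    static final Duration MAX_WINDOW = Duration.ofHours(12);   // server-side safety ceiling

    // Returns the effective end of the window: open-ended (null) or
    // too-long requests are clamped to start + 12 h.
    static Instant capScheduleWindow(Instant start, Instant requestedEnd) {
        Instant cap = start.plus(MAX_WINDOW);
        return (requestedEnd == null || requestedEnd.isAfter(cap)) ? cap : requestedEnd;
    }
}
```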

Record Now

A one-click button on each Service Monitor detail panel starts a recording against the current service instance immediately — no schedule dialog. Presets: 5m / 15m / 1h / 2h / Until I stop. All presets prompt for an optional notes tag before submitting. An explicit Stop button and red pulsing outline persist across page refreshes (tracked in localStorage).

In-progress indicators

MinIO layout

recordings/{host}/{service}/{instance}/{yyyy}/{MM}/{dd}/{scheduleId}-{executionId}-{HHmmss}.zip
recordings/{host}/{service}/{instance}/{yyyy}/{MM}/{dd}/{scheduleId}-{executionId}-{HHmmss}.meta.json

ZIP contains manifest.json + logs.json + gc-events.json + thread-dumps.json. The sidecar .meta.json carries counts, capture flags, notes, and the bundleKey — used by the browser to render a preview panel without opening the ZIP.
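The layout above maps onto a straightforward key builder. A sketch; the helper names and the choice of LocalDateTime are mine, only the path template comes from the listing.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

final class RecordingKeys {
    private static final DateTimeFormatter DATE = DateTimeFormatter.ofPattern("yyyy/MM/dd");
    private static final DateTimeFormatter TIME = DateTimeFormatter.ofPattern("HHmmss");

    // recordings/{host}/{service}/{instance}/{yyyy}/{MM}/{dd}/{scheduleId}-{executionId}-{HHmmss}.zip
    static String bundleKey(String host, String service, String instance,
                            long scheduleId, long executionId, LocalDateTime capturedAt) {
        return String.format("recordings/%s/%s/%s/%s/%d-%d-%s.zip",
                host, service, instance,
                capturedAt.format(DATE), scheduleId, executionId, capturedAt.format(TIME));
    }

    // The sidecar shares the key, with .zip swapped for .meta.json
    static String metaKey(String zipKey) {
        return zipKey.substring(0, zipKey.length() - ".zip".length()) + ".meta.json";
    }
}
```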

UI — Administration → Logs → Session Recordings

Layout & navigation

Recording header

Shared timeline (above Logs / GC / Threads tabs)

Tabs

Recording actions

Log Explorer (Administration → Logs → Log Explorer)

Service Monitor (where Record Now lives)

Global + cross-page

REST API

| Method | Endpoint | Description |
|---|---|---|
| GET | /monitor/schedules | List all schedules |
| GET | /monitor/schedules/active | Currently active schedules across all services (powers the indicators) |
| POST | /monitor/schedules | Create schedule — 409 on overlap |
| PUT | /monitor/schedules/{id} | Update schedule — 409 on overlap |
| DELETE | /monitor/schedules/{id} | Stop & delete schedule |
| GET | /monitor/schedules/{id}/logs | Paginated logs for a recording |
| GET | /monitor/schedules/{id}/gc | Paginated GC events |
| GET | /monitor/schedules/{id}/threads | Paginated thread dumps |
| GET | /monitor/schedules/{id}/bundle | Stream the full recording as a ZIP |
| DELETE | /monitor/schedules/{id}/executions/{execId} | Archive — write bundle to MinIO, detach DB rows |
| GET | /monitor/recordings/browse?prefix= | Lazy one-level browse of archived bundles |
| GET | /monitor/recordings/archived/download?key= | Download archived ZIP |
| POST | /monitor/recordings/archived/restore?key= | Restore bundle back into the DB |
| DELETE | /monitor/recordings/archived?key= | Permanently delete from MinIO |

DB tables: remote_log_schedule (service_name, instance_id, start/end time, timezone, recurrence, levels, threads_interval_ms, notes, persistent flag, active flag) and remote_log_schedule_execution (one row per ACTIVATED / DEACTIVATED / SKIPPED event). All endpoints require ROLE_ADMIN and are audit-logged.

nucleus-ui-message-broker

Real-time

Server-Sent Events (SSE) with role-based delivery for real-time UI updates.

Control Events — UserControlEvent + SessionControlEvent

Security

A separate high-priority lane for security/control imperatives the UI must act on immediately. Rides alongside normal notifications but never queues behind them — the broker uses a per-subscriber PriorityBlockingQueue so a SESSION_REVOKED arriving 10ms after a routine EXPENSE_APPROVED still flushes first.

Two event types, one envelope:

Currently planned kinds:

Topic shape: two Kafka topics per ecosystem — ${product}.ui.control (single-partition, low volume, ordering matters) and ${product}.ui.notifications (multi-partition, normal traffic). Both cleanup.policy=delete with 24h retention.

Defense-in-depth stack: control events are layer 2. The hard floor is layer 1 (short access-token TTL, refusing refresh for disabled users); layer 3 is per-mutation requireUserEnabled(user) checks in service code. Each layer covers what the others can't.

Modules: nucleus-user-control-events (POJOs), nucleus-user-control-publisher (publish API), nucleus-ui-message-broker (the SSE service, one deployment per ecosystem).

Token Revocation — JWT denylist + disabled-user filter cache

Security

The UI control channel above handles active sessions with millisecond responsiveness. But a stolen JWT used outside the browser (curl, an attacker, a stale tab without SSE) doesn't go through the UI — it goes straight to a REST endpoint. That's where this layer kicks in.

JWS as a standard is stateless: signature + exp claim is the entire validation story. To revoke before exp, every service holds a local denylist populated via Kafka events on a separate, dedicated topic (${product}.token.revocation) — distinct from the UI control topic.

Two caches per service:

Both per-node, in-heap (ConcurrentHashMap). No shared infrastructure — each service holds its own copy.

TokenRevocationFilter runs after JWT signature/exp validation, before controller dispatch:

  1. Extract jti, username from validated JWT.
  2. If denylistCache.contains(jti) → 401, "Token revoked".
  3. If disabledUserCache.contains(username) → 401, "Account disabled".
  4. Otherwise → continue.
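The check order above can be sketched as a pure decision function. Names here are hypothetical; the real TokenRevocationFilter wraps this logic in a servlet filter and writes the 401 body.

```java
import java.util.Set;

enum TokenDecision { ALLOW, TOKEN_REVOKED, ACCOUNT_DISABLED }

final class TokenChecks {
    // Mirrors steps 2-4 above: denylist first, then the disabled-user cache, else continue.
    static TokenDecision decide(String jti, String username,
                                Set<String> denylist, Set<String> disabledUsers) {
        if (denylist.contains(jti)) return TokenDecision.TOKEN_REVOKED;              // 401 "Token revoked"
        if (disabledUsers.contains(username)) return TokenDecision.ACCOUNT_DISABLED; // 401 "Account disabled"
        return TokenDecision.ALLOW;                                                  // continue the chain
    }
}
```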

Event kinds (token revocation topic):

Why a separate topic from the UI control channel? Different consumers, different blast radii, different failure-mode tolerances. The broker is one consumer; the auth filter runs in every service. A misbehaving broker shouldn't be able to break the auth path. Two topics, cheap to have.

Defense-in-depth: this is layer 1b. Layer 1a is JWT signature/exp validation (nucleus-authentication). Layer 2 is the SSE control channel above. Layer 3 is per-mutation requireUserEnabled(user) guards in service code. Each layer covers what the others can't.

Module: nucleus-token-revocation — auto-registers the filter when on the classpath. Same pattern as nucleus-monitoring-client.

nucleus-mail

NEW Communication

Kafka-driven email pipeline with three stages: render, send, track. 2026-04 consolidation: nucleus-mail-renderer and nucleus-mail-sender were merged into nucleus-mail; the pipeline now ships as one deployable with internal stages. Standalone repos are archived.

nucleus-storage

Storage

MinIO/S3-compatible object storage abstraction.

nucleus-execution-events

Processing

Async job execution framework for long-running processes.

nucleus-config

Configuration

Spring Cloud Config Server backed by MySQL database.

nucleus-user

Identity

User management, role assignment, and access request workflows.

nucleus-address

NEW Integration

Google Maps Places API integration for address autocomplete and validation. Single module — ships the runnable service, the shared Address JPA entity, payloads, and MapStruct mappers used by every Nucleus service that stores addresses. (Previously two modules; nucleus-address-common was merged in.)

Managed AddressType values (27 enum-mirror seed rows) live as a bucket in nucleus-reference-data, not a per-entity admin endpoint.

nucleus-reference-data & nucleus-reference-data-service

NEW Framework

Generic bucketed primitive for any "managed list of codes" — address types, classification labels, status enumerations, lookup tables. One library + one entity (ReferenceDataItem) + one table (reference_data_item) replaces the per-entity admin services we used to build for every code list.

Two artifacts:

Schema — one table, bucket discriminator (nucleus-db-schema):

| Column | Type | Notes |
|---|---|---|
| id | BIGINT (PK) | Auto-generated |
| bucket | VARCHAR | Discriminator (AddressType, FooType, …) |
| code | VARCHAR | Stable code (uppercase, unique within bucket) |
| label | VARCHAR | Human-readable |
| sort_order | INT | UI ordering hint |
| active | BOOLEAN | Soft-disable hides from selectors |
| system_owned | BOOLEAN | Locks seeded enum-mirror rows from delete/rename |

REST API (mounted at /reference-data):

| Method | Path | Auth (delegated) |
|---|---|---|
| GET | /{bucket} | canView |
| GET | /{bucket}/{id} | canView |
| POST | /{bucket} | canManage |
| PUT | /{bucket}/{id} | canManage |
| DELETE | /{bucket}/{id} | canDelete |
| POST | /{bucket}/filter | canView |
| GET | /{bucket}/references | authenticated |
| POST | /{bucket}/report/{pdf\|excel\|csv} | canReport |

Binary report endpoints carry @Operation + @ApiResponse declaring the matching mediaType — the typescript-angular generator emits responseType: 'blob' instead of defaulting to JSON. CSV writer is wrapped in try-with-resources.

Authorization — ReferenceDataSecurity SPI:

public interface ReferenceDataSecurity {
    boolean canView(String bucket, Authentication auth);
    boolean canManage(String bucket, Authentication auth);
    boolean canDelete(String bucket, Authentication auth);
    boolean canReport(String bucket, Authentication auth);
}

Each tenant registers a Spring bean named referenceDataSecurity implementing this SPI. The standalone service ships ConfigurableReferenceDataSecurity which reads bucket→authority maps from PROPERTIES so different deployments apply different policies without forking the controller.
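A sketch of the bucket-to-authority lookup a ConfigurableReferenceDataSecurity-style bean might perform. The map shape and the ROLE_ADMIN fallback are assumptions; only the SPI itself comes from the listing above.

```java
import java.util.Map;
import java.util.Set;

final class BucketPolicies {
    // bucket → authority required to manage it; the real service loads
    // this map from PROPERTIES so policies differ per deployment.
    static boolean canManage(Map<String, String> requiredByBucket,
                             String bucket, Set<String> grantedAuthorities) {
        String required = requiredByBucket.getOrDefault(bucket, "ROLE_ADMIN"); // conservative default (assumption)
        return grantedAuthorities.contains(required);
    }
}
```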

UI — one generic admin route. The bookwise-ui (and goldfish-ui mirror) ship /administration/reference-data/:bucket served by ReferenceDataComponent. Adding a new bucket is a 2-line UI change (label + route entry) — no new component, no new service, no new OpenAPI client. The Node proxy route /api/reference-data is already wired with pathRewrite: { '^/api/reference-data': '' }.

Authoring rule. If you're about to build "manage a list of codes for X", do NOT create nucleus-X-type-service or x-types.component.ts. Pick a bucket name, add a migration row, register your auth policy, add the bucket label to the UI. Done. The original per-entity address-types component was deleted in the 2026-04-26 sweep precisely because of this rule.

Deployment ports: BookWise = 11026, GoldFish = 10026, TaskSense = 9026.

Session Tracking & MDC Diagnostic Framework

Observability

Every HTTP request carries diagnostic context from the browser through all backend services into log output. Support teams trace a user's entire session across microservices with a single ID.

How It Works

  1. Browser generates a uiSessionId on first request and persists it in localStorage
  2. Angular global interceptor attaches X-Ui-Session-Id header to every outgoing HTTP request
  3. Backend filters populate SLF4J MDC per request — automatically included in every log line
  4. User shares their Session ID from the Support dialog (footer → Support → copy button)
  5. Support searches logs by session ID in the Log Explorer or live stream filter

MDC Filters (nucleus-core)

| Order | Filter | MDC keys | Source |
|---|---|---|---|
| 1 | SecurityContextFilter | username | Spring Security authentication |
| 2 | ContexPathFilter | remote-ip, context-path | HttpServletRequest |
| 3 | UiSessionMdcFilter | uiSessionId | X-UI-Session-Id header or ui_session_id cookie |

Support Workflow

  1. User reports an issue → opens Support dialog → copies Session ID
  2. Support opens Log Explorer → selects service → pastes Session ID in search
  3. All log entries from that browser session are displayed across all services
  4. For live debugging: enable remote logging → open Logs tab → paste ID in filter

The session ID is a random diagnostic key — not a JWT or auth token. Safe to share with support.

Field-Level Data Obfuscation

Security

Per-client, per-field encryption framework that allows users to hide sensitive data (transaction descriptions, memos) using AES/GCM. Data is encrypted at rest and decrypted only in-memory when needed for processing.

Encryption

| Property | Value |
|---|---|
| Algorithm | AES/GCM/NoPadding |
| Auth tag | 128-bit |
| IV | 96-bit (random per encryption) |
| Key storage | `user_security_key` table (Base64, per-user) |
| Encrypted format | `ENC:GCM:[Base64(IV+ciphertext)]` |
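The table's format contract can be demonstrated with standard JDK crypto: AES/GCM/NoPadding, a random 96-bit IV, a 128-bit tag, the IV prepended to the ciphertext, and the result Base64-encoded behind an `ENC:GCM:` prefix. The class and method names below are assumptions for illustration, not the actual nucleus-core Obfuscator API.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

// Illustrative sketch of the ENC:GCM:[Base64(IV+ciphertext)] format.
public class GcmFieldCipher {
    private static final String PREFIX = "ENC:GCM:";
    private static final int IV_BYTES = 12;   // 96-bit IV, random per encryption
    private static final int TAG_BITS = 128;  // 128-bit auth tag

    public static String encrypt(String plaintext, SecretKey key) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        // Prepend the IV so decryption is self-contained.
        byte[] ivAndCt = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, ivAndCt, 0, iv.length);
        System.arraycopy(ct, 0, ivAndCt, iv.length, ct.length);
        return PREFIX + Base64.getEncoder().encodeToString(ivAndCt);
    }

    public static String decrypt(String encoded, SecretKey key) throws Exception {
        byte[] ivAndCt = Base64.getDecoder().decode(encoded.substring(PREFIX.length()));
        byte[] iv = Arrays.copyOfRange(ivAndCt, 0, IV_BYTES);
        byte[] ct = Arrays.copyOfRange(ivAndCt, IV_BYTES, ivAndCt.length);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        return new String(cipher.doFinal(ct), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();
        String enc = encrypt("Coffee at Blue Bottle", key);
        System.out.println(enc.startsWith("ENC:GCM:")); // true
        System.out.println(decrypt(enc, key));
    }
}
```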

Obfuscation Lifecycle

  1. User clicks lock icon on a transaction → UI calls POST /transaction/obfuscate
  2. Service hashes the description, finds ALL matching transactions (batch obfuscation)
  3. Creates UserObfuscationRule for audit trail (entityName + fieldName + hash)
  4. Fetches client encryption key, encrypts description + memo with AES/GCM
  5. Sets display to "Hidden Transaction" / "Hidden Notes", obfuscated=true
  6. Unlock reverses: deletes rule, decrypts, restores plaintext
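Step 2's description hash, which enables batch matching without exposing plaintext, might look like the following. SHA-256 over the trimmed, lowercased description is an assumption for this sketch; the actual normalization and hash algorithm used for `literalHash` are not specified in this document.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Hypothetical literalHash: every transaction whose description normalizes to
// the same text produces the same hash, so they can be obfuscated as a batch
// and the hash stored on the UserObfuscationRule without the plaintext.
public class LiteralHash {
    public static String of(String description) throws Exception {
        String normalized = description.trim().toLowerCase();
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(normalized.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(digest);
    }

    public static void main(String[] args) throws Exception {
        // Equal after normalization -> same hash -> same obfuscation batch.
        System.out.println(of("Coffee Shop ").equals(of("coffee shop"))); // true
    }
}
```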

Cross-Service Integration

| Service | Role |
|---|---|
| nucleus-core | Obfuscator — AES/GCM encrypt/decrypt engine |
| user-service | Stores encryption keys + obfuscation rules (audit trail) |
| transaction-service | Performs obfuscation lifecycle, stores encrypted fields |
| categorizer-service | Decrypts in-memory for AI classification (never logged) |

Obfuscation Rules API

| Endpoint | Description |
|---|---|
| GET /user/{clientId}/obfuscation-rules | List all rules for a client |
| POST /user/{clientId}/obfuscation-rules | Create rule (entityName, fieldName, literalHash) |
| GET /user/{clientId}/obfuscation-rules/exists | Check if rule exists |
| DELETE /user/{clientId}/obfuscation-rules/{ruleId} | Delete rule |
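Calls against these endpoints could be constructed as below. The paths come from the table above, but the host and the use of `java.net.http` (rather than the generated Java WebClient client mentioned in the Tech Stack) are assumptions for this sketch.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Illustrative request construction for the Obfuscation Rules API.
public class RulesApiRequests {
    static HttpRequest listRules(String baseUrl, long clientId) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/user/" + clientId + "/obfuscation-rules"))
                .GET()
                .build();
    }

    static HttpRequest deleteRule(String baseUrl, long clientId, long ruleId) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/user/" + clientId
                        + "/obfuscation-rules/" + ruleId))
                .DELETE()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = listRules("http://localhost:8080", 42L);
        System.out.println(req.method() + " " + req.uri());
    }
}
```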

Security Guarantees

PII-Safe Logging Framework

Security

All services use a PII-safe logging framework that automatically sanitizes sensitive data before it reaches log files, monitoring systems, or log aggregators. This prevents accidental exposure of personal data.

Components

| Component | Purpose |
|---|---|
| @CustomLog | Lombok annotation replacing @Slf4j. Uses MaskingLoggerFactory for automatic PII sanitization. |
| PiiArgs.keyValue(key, obj) | Structured argument for explicit sanitization when logging objects. Returns logstash-compatible StructuredArgument. |
| PiiSanitizer | Recursively sanitizes POJOs, collections, maps. Handles cycles and nested objects. |
| @Pii | Field-level annotation declaring inline masking strategy (highest priority). |

Configuration

Each service declares PII rules in src/main/resources/application-pii.yml:

default-strategy: NONE

fields:
  email: PARTIAL
  password: FULL
  token: FULL
  jwt: FULL
  ssn: FULL
  creditCard: HASH

class-fields:
  com.example.UserDto.phoneNumber: PARTIAL
  com.example.PaymentDto.cardNumber: HASH

Masking Strategies

| Strategy | Output | Use for |
|---|---|---|
| NONE | Value as-is | Non-sensitive fields |
| PARTIAL | d***n@email.com | Fields needing partial visibility (email, phone) |
| FULL | ******** | Secrets (passwords, tokens, SSN) |
| HASH | a1b2c3d4 | Correlation without exposure (IDs, card numbers) |
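A minimal sketch of the four strategies from the table, to make the output shapes concrete. The real PiiSanitizer's exact rules (e.g. how many characters PARTIAL preserves, or which hash HASH uses) are assumptions here.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Hypothetical implementation of the NONE/PARTIAL/FULL/HASH strategies.
public class Masker {
    public enum Strategy { NONE, PARTIAL, FULL, HASH }

    public static String mask(String value, Strategy strategy) {
        switch (strategy) {
            case NONE:    return value;
            case FULL:    return "********";
            case PARTIAL: return partial(value);
            case HASH:    return sha256Prefix(value);
            default:      throw new IllegalStateException();
        }
    }

    // Keep first and last character of the local part: dylan@email.com -> d***n@email.com
    private static String partial(String value) {
        int at = value.indexOf('@');
        String local = at >= 0 ? value.substring(0, at) : value;
        String rest  = at >= 0 ? value.substring(at) : "";
        if (local.length() <= 2) return "***" + rest;
        return local.charAt(0) + "***" + local.charAt(local.length() - 1) + rest;
    }

    // First 8 hex chars of SHA-256: a stable correlation key, no plaintext.
    private static String sha256Prefix(String value) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                    .digest(value.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(d).substring(0, 8);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(mask("dylan@email.com", Strategy.PARTIAL)); // d***n@email.com
        System.out.println(mask("hunter2", Strategy.FULL));            // ********
    }
}
```

HASH is useful because the same input always yields the same token, so support can correlate log lines for one card or ID without ever seeing the value.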

Resolution Order

  1. @Pii annotation on the field (highest priority)
  2. class-fields in application-pii.yml (FQCN.fieldName)
  3. fields in application-pii.yml (field name only)
  4. default-strategy in application-pii.yml (fallback)
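The four-step resolution order above can be sketched as a simple lookup chain. The `@Pii` annotation lookup is simulated with an Optional, since reflecting over real annotations is out of scope; names and signatures are hypothetical.

```java
import java.util.Map;
import java.util.Optional;

// Sketch of the resolution order: @Pii > class-fields > fields > default.
public class StrategyResolver {
    public static String resolve(Optional<String> piiAnnotation,
                                 Map<String, String> classFields,
                                 Map<String, String> fields,
                                 String fqcn, String fieldName,
                                 String defaultStrategy) {
        if (piiAnnotation.isPresent()) return piiAnnotation.get();   // 1. @Pii on the field
        String byClass = classFields.get(fqcn + "." + fieldName);    // 2. class-fields (FQCN.fieldName)
        if (byClass != null) return byClass;
        String byField = fields.get(fieldName);                      // 3. fields (field name only)
        if (byField != null) return byField;
        return defaultStrategy;                                      // 4. default-strategy
    }

    public static void main(String[] args) {
        Map<String, String> classFields = Map.of("com.example.UserDto.phoneNumber", "PARTIAL");
        Map<String, String> fields = Map.of("email", "PARTIAL", "password", "FULL");
        // class-fields beats the global fields map; unmatched fields fall through.
        System.out.println(resolve(Optional.empty(), classFields, fields,
                "com.example.UserDto", "phoneNumber", "NONE")); // PARTIAL
        System.out.println(resolve(Optional.empty(), classFields, fields,
                "com.example.UserDto", "nickname", "NONE"));    // NONE
    }
}
```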

Setup

1. Add lombok.config to each module:

lombok.log.custom.declaration = org.slf4j.Logger com.nucleus.core.logger.MaskingLoggerFactory.getLogger(TYPE)

2. Replace @Slf4j with @CustomLog on all classes.

3. Use PiiArgs.keyValue() when logging objects:

log.info("Processing {}", PiiArgs.keyValue("request", payload));

Dependency Map

nucleus-parent  -->  nucleus-bom
nucleus-bom     -->  All modules (version management)

nucleus-core  -->  nucleus-connectors
              -->  nucleus-storage
              -->  nucleus-audit-core
              -->  nucleus-authentication
              -->  nucleus-execution-events

nucleus-connectors  -->  nucleus-monitoring-client
                    -->  nucleus-ui-message-broker

nucleus-mail-common  -->  nucleus-mail
                     -->  nucleus-mail-renderer
                     -->  nucleus-mail-sender

nucleus-audit-core  -->  All application services (transitive)

Configuration Properties

| Property | Default | Module |
|---|---|---|
| kafka.service.registry.topic.name | bookwise.service.registry | monitoring-client |
| kafka.remote.logging.topic.name | bookwise.remote.logs | monitoring-client |
| monitoring.heartbeat.interval-ms | 30000 | monitoring-client |
| spring.config.import | — | All services |
| spring.profiles.active | — | All services |
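As a sketch, overriding the monitoring-client defaults in a service's application.yml would look like this (topic names shown are the defaults from the table; the shortened interval is just an example value):

```yaml
kafka:
  service:
    registry:
      topic:
        name: bookwise.service.registry
  remote:
    logging:
      topic:
        name: bookwise.remote.logs
monitoring:
  heartbeat:
    interval-ms: 15000   # example: heartbeat twice as often as the 30000 ms default
```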

Tech Stack

| Layer | Technology |
|---|---|
| Backend | Java 17, Spring Boot 3.5, Spring Security, Spring Cloud Config |
| Messaging | Pluggable via @NucleusListener: Kafka, RabbitMQ, SQS, SNS, Kinesis, Google Pub/Sub, Azure Service Bus, Azure Queue Storage, ActiveMQ, Artemis, Solace (JMS + native), IBM MQ |
| AI | Provider-agnostic AiClient: OpenAI, Anthropic (extensible) |
| Database | Spring Data JPA / Hibernate (database-agnostic: MySQL, PostgreSQL, Oracle, SQL Server) |
| Storage | MinIO (S3-compatible object storage) |
| Real-time | Server-Sent Events (SSE) with role-based delivery |
| Testing | Spock 2.4 / Groovy 4.x, JaCoCo coverage, 866+ tests |
| CI/CD | Jenkins pipelines, Artifactory, Bitbucket webhooks |
| API | OpenAPI 3.0 with generated TypeScript + Java WebClient clients |