
Chapter 26: Technical Resources

Executive Summary

This chapter provides a comprehensive technical resource library for FinTech consulting teams, including detailed technology comparisons, implementation templates, code samples, architecture patterns, and integration guides. These resources have been validated across hundreds of FinTech implementations and provide immediately actionable technical guidance for common scenarios and challenges.

The technical resources cover the full stack from infrastructure and databases to application frameworks and security implementations, with specific focus on the unique requirements of financial services including regulatory compliance, security, performance, and auditability.

Cloud Infrastructure Resources

Cloud Platform Comparison for FinTech

Comprehensive Platform Analysis

YAML Configuration


cloud_platforms:
  aws:
    fintech_adoption: "70% of major financial institutions"
    compliance_certifications:
      - "SOC 1, 2, 3 Type II compliant"
      - "PCI DSS Level 1 compliant"
      - "FIPS 140-2 Level 3 validated"
      - "Common Criteria evaluated"
    financial_services_offerings:
      - "AWS for Financial Services Cloud"
      - "Amazon Connect for contact centers"
      - "Amazon Fraud Detector"
      - "Amazon Personalize for recommendations"
    pricing:
      compute: "On-demand, reserved, spot pricing"
      storage: "Tiered pricing based on access frequency"
      data_transfer: "Free inbound, charged outbound"
      support: "Business support recommended ($100/month minimum)"
    strengths:
      - "Extensive financial services ecosystem"
      - "Advanced security and compliance features"
      - "Global infrastructure with data residency options"
      - "Comprehensive AI/ML services for fraud detection"
    weaknesses:
      - "Complex pricing model"
      - "Steep learning curve"
      - "Vendor lock-in concerns"
      - "Cost optimization requires expertise"
  azure:
    fintech_adoption: "40% of major financial institutions"
    compliance_certifications:
      - "ISO 27001, 27018 certified"
      - "SOC 1, 2 compliant"
      - "PCI DSS Level 1 compliant"
      - "HIPAA BAA available"
    financial_services_offerings:
      - "Azure for Financial Services"
      - "Azure Synapse Analytics"
      - "Azure Cognitive Services"
      - "Azure Security Center"
    pricing:
      compute: "Pay-as-you-use with reserved instance discounts"
      storage: "Hot, cool, archive tiers"
      data_transfer: "Free between regions in same geography"
      support: "Professional Direct recommended ($1,000/month)"
    strengths:
      - "Strong integration with Microsoft ecosystem"
      - "Advanced analytics and AI capabilities"
      - "Hybrid cloud capabilities"
      - "Competitive pricing for Windows workloads"
    weaknesses:
      - "Less mature financial services ecosystem than AWS"
      - "Some services not available in all regions"
      - "Learning curve for non-Microsoft shops"
      - "Limited spot instance options"
  google_cloud:
    fintech_adoption: "25% of major financial institutions"
    compliance_certifications:
      - "ISO 27001, 27017, 27018 certified"
      - "SOC 1, 2, 3 compliant"
      - "PCI DSS Level 1 compliant"
      - "FIPS 140-2 Level 3 validated"
    financial_services_offerings:
      - "Google Cloud for Financial Services"
      - "BigQuery for analytics"
      - "AutoML for custom models"
      - "Chronicle for security operations"
    pricing:
      compute: "Per-second billing with sustained use discounts"
      storage: "Multi-regional, regional, nearline, coldline"
      data_transfer: "Free egress to other Google services"
      support: "Premium support ($12,500/month minimum)"
    strengths:
      - "Leading AI/ML capabilities"
      - "BigQuery for massive data analytics"
      - "Per-second billing granularity"
      - "Strong data analytics ecosystem"
    weaknesses:
      - "Smallest financial services ecosystem"
      - "Limited enterprise sales support"
      - "Fewer compliance certifications"
      - "Premium support very expensive"

Infrastructure as Code Templates

Terraform Templates for FinTech

AWS FinTech Infrastructure Template:

hcl
# AWS FinTech Infrastructure Template
# Production-ready infrastructure for financial services

terraform {
  required_version = ">= 1.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket         = "fintech-terraform-state"
    key            = "infrastructure/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}

# Provider configuration with common tags
provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      Environment   = var.environment
      Project       = var.project_name
      Owner         = var.owner
      CostCenter    = var.cost_center
      Compliance    = "PCI-DSS"
      DataClass     = "Confidential"
    }
  }
}

# Variables
variable "environment" {
  description = "Environment name"
  type        = string
  validation {
    condition = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

variable "project_name" {
  description = "Project name for resource naming"
  type        = string
}

variable "aws_region" {
  description = "AWS region for all resources"
  type        = string
  default     = "us-east-1"
}

variable "owner" {
  description = "Team or individual responsible for these resources"
  type        = string
}

variable "cost_center" {
  description = "Cost center for billing allocation"
  type        = string
}

variable "database_name" {
  description = "Name of the application database"
  type        = string
}

variable "database_username" {
  description = "Master username for the database"
  type        = string
  sensitive   = true
}

variable "database_password" {
  description = "Master password for the database (prefer sourcing from AWS Secrets Manager)"
  type        = string
  sensitive   = true
}

# VPC with private subnets for security
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "${var.project_name}-${var.environment}-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["${var.aws_region}a", "${var.aws_region}b", "${var.aws_region}c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
  database_subnets = ["10.0.201.0/24", "10.0.202.0/24", "10.0.203.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = true
  enable_dns_hostnames = true
  enable_dns_support = true

  # Enable VPC Flow Logs for security monitoring
  enable_flow_log = true
  flow_log_destination_type = "cloud-watch-logs"
}

# Security Groups
resource "aws_security_group" "application" {
  name_prefix = "${var.project_name}-${var.environment}-app-"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [module.vpc.vpc_cidr_block]
  }

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = [module.vpc.vpc_cidr_block]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "${var.project_name}-${var.environment}-app-sg"
  }
}

resource "aws_security_group" "database" {
  name_prefix = "${var.project_name}-${var.environment}-db-"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.application.id]
  }

  tags = {
    Name = "${var.project_name}-${var.environment}-db-sg"
  }
}

# RDS PostgreSQL with encryption
resource "aws_db_subnet_group" "main" {
  name       = "${var.project_name}-${var.environment}-db-subnet-group"
  subnet_ids = module.vpc.database_subnets

  tags = {
    Name = "${var.project_name}-${var.environment}-db-subnet-group"
  }
}

resource "aws_db_instance" "main" {
  identifier = "${var.project_name}-${var.environment}-db"

  # Database configuration
  engine         = "postgres"
  engine_version = "15.4"
  instance_class = var.environment == "prod" ? "db.r6g.xlarge" : "db.t3.medium"

  # Storage configuration
  allocated_storage     = var.environment == "prod" ? 100 : 20
  max_allocated_storage = var.environment == "prod" ? 1000 : 100
  storage_encrypted     = true
  storage_type          = "gp3"

  # Database settings
  db_name  = var.database_name
  username = var.database_username
  password = var.database_password

  # Network configuration
  db_subnet_group_name   = aws_db_subnet_group.main.name
  vpc_security_group_ids = [aws_security_group.database.id]

  # Backup configuration
  backup_retention_period = var.environment == "prod" ? 30 : 7
  backup_window          = "03:00-04:00"
  maintenance_window     = "Sun:04:00-Sun:05:00"

  # Security configuration
  deletion_protection = var.environment == "prod"
  skip_final_snapshot = var.environment != "prod"

  # Monitoring
  monitoring_interval = 60
  monitoring_role_arn = aws_iam_role.rds_monitoring.arn

  # Performance Insights
  performance_insights_enabled = true
  performance_insights_retention_period = var.environment == "prod" ? 731 : 7

  tags = {
    Name = "${var.project_name}-${var.environment}-database"
  }
}

# EKS Cluster for containerized applications
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  cluster_name    = "${var.project_name}-${var.environment}-eks"
  cluster_version = "1.28"

  vpc_id                         = module.vpc.vpc_id
  subnet_ids                     = module.vpc.private_subnets
  cluster_endpoint_public_access = false
  cluster_endpoint_private_access = true

  # OIDC Identity provider
  cluster_identity_providers = {
    sts = {
      client_id = "sts.amazonaws.com"
    }
  }

  # EKS Managed Node Groups
  eks_managed_node_groups = {
    main = {
      name = "main"

      instance_types = var.environment == "prod" ? ["m5.xlarge"] : ["t3.medium"]

      min_size     = var.environment == "prod" ? 3 : 1
      max_size     = var.environment == "prod" ? 10 : 3
      desired_size = var.environment == "prod" ? 3 : 2

      disk_size = 50
      disk_type = "gp3"

      labels = {
        Environment = var.environment
        NodeGroup   = "main"
      }

      update_config = {
        max_unavailable_percentage = 25
      }
    }
  }

  # aws-auth configmap
  manage_aws_auth_configmap = true

  tags = {
    Environment = var.environment
  }
}

# Application Load Balancer
resource "aws_lb" "main" {
  name               = "${var.project_name}-${var.environment}-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = module.vpc.public_subnets

  enable_deletion_protection = var.environment == "prod"

  access_logs {
    bucket  = aws_s3_bucket.alb_logs.bucket
    prefix  = "alb-logs"
    enabled = true
  }

  tags = {
    Name = "${var.project_name}-${var.environment}-alb"
  }
}

# S3 Bucket for ALB logs with encryption
# Note: ALB access logging also requires a bucket policy granting the
# regional Elastic Load Balancing service account permission to write logs.
resource "aws_s3_bucket" "alb_logs" {
  bucket = "${var.project_name}-${var.environment}-alb-logs-${random_string.bucket_suffix.result}"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "alb_logs" {
  bucket = aws_s3_bucket.alb_logs.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "alb_logs" {
  bucket = aws_s3_bucket.alb_logs.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Random string for unique bucket naming
resource "random_string" "bucket_suffix" {
  length  = 8
  special = false
  upper   = false
}

# IAM role for RDS monitoring
resource "aws_iam_role" "rds_monitoring" {
  name = "${var.project_name}-${var.environment}-rds-monitoring-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "monitoring.rds.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "rds_monitoring" {
  role       = aws_iam_role.rds_monitoring.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole"
}

# CloudWatch Log Groups for application logs
resource "aws_cloudwatch_log_group" "application" {
  name              = "/aws/eks/${var.project_name}-${var.environment}/application"
  retention_in_days = var.environment == "prod" ? 365 : 30

  tags = {
    Name = "${var.project_name}-${var.environment}-application-logs"
  }
}

# Outputs
output "vpc_id" {
  description = "ID of the VPC"
  value       = module.vpc.vpc_id
}

output "eks_cluster_endpoint" {
  description = "Endpoint for EKS control plane"
  value       = module.eks.cluster_endpoint
}

output "eks_cluster_name" {
  description = "Name of the EKS cluster"
  value       = module.eks.cluster_name
}

output "rds_endpoint" {
  description = "RDS instance endpoint"
  value       = aws_db_instance.main.endpoint
  sensitive   = true
}

output "load_balancer_dns" {
  description = "DNS name of the load balancer"
  value       = aws_lb.main.dns_name
}
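A sketch of the variable values this template expects, with illustrative placeholders only (project names, cost centers, and credentials are assumptions; in production, source the database password from AWS Secrets Manager rather than a tfvars file):

```hcl
# terraform.tfvars (illustrative values only)
environment       = "staging"
project_name      = "payments-core"
owner             = "platform-team"
cost_center       = "CC-1234"
database_name     = "payments"
database_username = "app_admin"
database_password = "change-me"  # placeholder; prefer a secrets manager reference
```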

Container Orchestration Templates

Kubernetes Manifests for FinTech Applications

Production-Ready Kubernetes Deployment:

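A minimal sketch of such a deployment, assuming a hypothetical `payment-service` image and `fintech` namespace. Resource limits, health probes, a non-root security context, and secret-based configuration are the elements most often scrutinized in financial services reviews:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
  namespace: fintech
  labels:
    app: payment-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payment-service
  template:
    metadata:
      labels:
        app: payment-service
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
      containers:
        - name: payment-service
          image: registry.example.com/payment-service:1.0.0  # hypothetical image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
          envFrom:
            - secretRef:
                name: payment-service-secrets  # database credentials, API keys
```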

Database Resources

Database Design Patterns for FinTech

PostgreSQL Schema Templates

Core Banking Database Schema:

sql
-- Core Banking Database Schema
-- Designed for ACID compliance and audit requirements

-- Enable required extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
CREATE EXTENSION IF NOT EXISTS "pg_stat_statements";

-- Create schemas for organization
CREATE SCHEMA IF NOT EXISTS core_banking;
CREATE SCHEMA IF NOT EXISTS audit;
CREATE SCHEMA IF NOT EXISTS reporting;

-- Set search path
SET search_path TO core_banking, public;

-- Customers table with PII protection
CREATE TABLE customers (
    customer_id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    customer_number VARCHAR(20) UNIQUE NOT NULL,

    -- Personal Information (encrypted)
    first_name_encrypted BYTEA NOT NULL,
    last_name_encrypted BYTEA NOT NULL,
    email_encrypted BYTEA NOT NULL,
    phone_encrypted BYTEA,
    ssn_encrypted BYTEA,

    -- Address Information (encrypted)
    address_line1_encrypted BYTEA,
    address_line2_encrypted BYTEA,
    city_encrypted BYTEA,
    state_code VARCHAR(2),
    postal_code_encrypted BYTEA,
    country_code VARCHAR(3) DEFAULT 'USA',

    -- Account Information
    date_of_birth DATE,
    customer_type VARCHAR(20) CHECK (customer_type IN ('INDIVIDUAL', 'BUSINESS', 'JOINT')),
    customer_status VARCHAR(20) DEFAULT 'ACTIVE' CHECK (customer_status IN ('ACTIVE', 'INACTIVE', 'SUSPENDED', 'CLOSED')),
    risk_rating VARCHAR(10) CHECK (risk_rating IN ('LOW', 'MEDIUM', 'HIGH')),

    -- KYC Information
    kyc_status VARCHAR(20) DEFAULT 'PENDING' CHECK (kyc_status IN ('PENDING', 'VERIFIED', 'REJECTED', 'EXPIRED')),
    kyc_verification_date TIMESTAMP WITH TIME ZONE,
    kyc_expiry_date TIMESTAMP WITH TIME ZONE,
    identity_verified BOOLEAN DEFAULT FALSE,
    address_verified BOOLEAN DEFAULT FALSE,

    -- Compliance flags
    pep_status BOOLEAN DEFAULT FALSE,  -- Politically Exposed Person
    sanctions_checked BOOLEAN DEFAULT FALSE,
    sanctions_check_date TIMESTAMP WITH TIME ZONE,
    ofac_checked BOOLEAN DEFAULT FALSE,
    ofac_check_date TIMESTAMP WITH TIME ZONE,

    -- Audit fields
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    created_by VARCHAR(50) NOT NULL,
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    updated_by VARCHAR(50) NOT NULL,
    version INTEGER DEFAULT 1
);

-- Create indexes for performance
CREATE INDEX idx_customers_customer_number ON customers(customer_number);
CREATE INDEX idx_customers_status ON customers(customer_status);
CREATE INDEX idx_customers_kyc_status ON customers(kyc_status);
CREATE INDEX idx_customers_created_at ON customers(created_at);

-- Accounts table
CREATE TABLE accounts (
    account_id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    account_number VARCHAR(20) UNIQUE NOT NULL,
    customer_id UUID NOT NULL REFERENCES customers(customer_id),

    -- Account details
    account_type VARCHAR(20) NOT NULL CHECK (account_type IN ('CHECKING', 'SAVINGS', 'MONEY_MARKET', 'CD', 'LOAN', 'CREDIT_CARD')),
    account_subtype VARCHAR(50),
    account_name VARCHAR(100) NOT NULL,

    -- Balances (in cents to avoid floating point issues)
    current_balance BIGINT DEFAULT 0,
    available_balance BIGINT DEFAULT 0,
    pending_balance BIGINT DEFAULT 0,
    hold_balance BIGINT DEFAULT 0,

    -- Account limits
    daily_withdrawal_limit BIGINT,
    daily_transfer_limit BIGINT,
    monthly_transaction_limit INTEGER,
    overdraft_limit BIGINT DEFAULT 0,

    -- Account status and dates
    account_status VARCHAR(20) DEFAULT 'ACTIVE' CHECK (account_status IN ('ACTIVE', 'INACTIVE', 'FROZEN', 'CLOSED')),
    opened_date DATE NOT NULL DEFAULT CURRENT_DATE,
    closed_date DATE,
    last_activity_date TIMESTAMP WITH TIME ZONE,

    -- Interest and fees
    interest_rate DECIMAL(5,4) DEFAULT 0.0000,
    fee_schedule VARCHAR(50),
    minimum_balance BIGINT DEFAULT 0,

    -- Compliance
    cip_verified BOOLEAN DEFAULT FALSE,  -- Customer Identification Program
    cip_verification_date TIMESTAMP WITH TIME ZONE,
    regulatory_reporting BOOLEAN DEFAULT TRUE,

    -- Audit fields
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    created_by VARCHAR(50) NOT NULL,
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    updated_by VARCHAR(50) NOT NULL,
    version INTEGER DEFAULT 1,

    -- Constraints
    CONSTRAINT chk_balance_non_negative CHECK (current_balance >= 0 OR account_type IN ('LOAN', 'CREDIT_CARD')),
    CONSTRAINT chk_opened_before_closed CHECK (closed_date IS NULL OR closed_date >= opened_date)
);

-- Indexes for accounts
CREATE INDEX idx_accounts_customer_id ON accounts(customer_id);
CREATE INDEX idx_accounts_account_number ON accounts(account_number);
CREATE INDEX idx_accounts_account_type ON accounts(account_type);
CREATE INDEX idx_accounts_status ON accounts(account_status);
CREATE INDEX idx_accounts_opened_date ON accounts(opened_date);
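-- Why balances are stored as BIGINT cents rather than floating point:
-- binary floats cannot represent most decimal fractions exactly, so
-- monetary arithmetic drifts. Illustrative check (not part of the schema):
SELECT 0.1::float8 + 0.2::float8 = 0.3::float8 AS float_math_is_exact;  -- false
SELECT 10 + 20 = 30 AS cent_math_is_exact;                              -- true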

-- Transactions table with immutable design
CREATE TABLE transactions (
    transaction_id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    transaction_reference VARCHAR(50) UNIQUE NOT NULL,

    -- Account information
    account_id UUID NOT NULL REFERENCES accounts(account_id),
    contra_account_id UUID REFERENCES accounts(account_id),

    -- Transaction details
    transaction_type VARCHAR(20) NOT NULL CHECK (transaction_type IN ('DEBIT', 'CREDIT', 'TRANSFER', 'FEE', 'INTEREST')),
    transaction_category VARCHAR(50),
    transaction_description TEXT NOT NULL,

    -- Amounts (in cents)
    amount BIGINT NOT NULL CHECK (amount > 0),
    fee_amount BIGINT DEFAULT 0,

    -- Transaction status and timing
    transaction_status VARCHAR(20) DEFAULT 'PENDING' CHECK (transaction_status IN ('PENDING', 'POSTED', 'FAILED', 'REVERSED')),
    transaction_date DATE NOT NULL DEFAULT CURRENT_DATE,
    value_date DATE NOT NULL DEFAULT CURRENT_DATE,
    posted_timestamp TIMESTAMP WITH TIME ZONE,

    -- External references
    external_reference VARCHAR(100),
    channel VARCHAR(20) CHECK (channel IN ('ATM', 'ONLINE', 'MOBILE', 'BRANCH', 'WIRE', 'ACH', 'CHECK')),

    -- Location information
    merchant_name VARCHAR(100),
    merchant_category_code VARCHAR(4),
    terminal_id VARCHAR(50),
    location_city VARCHAR(50),
    location_state VARCHAR(2),
    location_country VARCHAR(3),

    -- Compliance and fraud
    aml_flagged BOOLEAN DEFAULT FALSE,
    aml_reviewed BOOLEAN DEFAULT FALSE,
    fraud_score INTEGER CHECK (fraud_score BETWEEN 0 AND 100),
    fraud_flagged BOOLEAN DEFAULT FALSE,

    -- Audit fields (immutable)
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,
    created_by VARCHAR(50) NOT NULL,

    -- Constraints
    CONSTRAINT chk_contra_account_transfer CHECK (
        (transaction_type = 'TRANSFER' AND contra_account_id IS NOT NULL) OR
        (transaction_type != 'TRANSFER')
    )
);

-- Indexes for transactions (read-heavy table)
CREATE INDEX idx_transactions_account_id ON transactions(account_id);
CREATE INDEX idx_transactions_transaction_date ON transactions(transaction_date);
CREATE INDEX idx_transactions_posted_timestamp ON transactions(posted_timestamp);
CREATE INDEX idx_transactions_status ON transactions(transaction_status);
CREATE INDEX idx_transactions_type ON transactions(transaction_type);
CREATE INDEX idx_transactions_external_ref ON transactions(external_reference);
CREATE INDEX idx_transactions_aml_flagged ON transactions(aml_flagged) WHERE aml_flagged = TRUE;
CREATE INDEX idx_transactions_fraud_flagged ON transactions(fraud_flagged) WHERE fraud_flagged = TRUE;

-- Composite indexes for common queries
CREATE INDEX idx_transactions_account_date ON transactions(account_id, transaction_date DESC);
CREATE INDEX idx_transactions_account_status ON transactions(account_id, transaction_status);

-- Account balances history for point-in-time reporting
CREATE TABLE account_balance_snapshots (
    snapshot_id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    account_id UUID NOT NULL REFERENCES accounts(account_id),
    snapshot_date DATE NOT NULL,

    -- Balance amounts (in cents)
    opening_balance BIGINT NOT NULL,
    closing_balance BIGINT NOT NULL,
    total_debits BIGINT DEFAULT 0,
    total_credits BIGINT DEFAULT 0,
    total_fees BIGINT DEFAULT 0,
    transaction_count INTEGER DEFAULT 0,

    -- Audit
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,

    UNIQUE(account_id, snapshot_date)
);

CREATE INDEX idx_balance_snapshots_account_date ON account_balance_snapshots(account_id, snapshot_date);
CREATE INDEX idx_balance_snapshots_date ON account_balance_snapshots(snapshot_date);

-- Audit trail table
CREATE TABLE audit.audit_trail (
    audit_id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    table_name VARCHAR(50) NOT NULL,
    record_id UUID NOT NULL,
    operation VARCHAR(10) NOT NULL CHECK (operation IN ('INSERT', 'UPDATE', 'DELETE')),

    -- Change details
    old_values JSONB,
    new_values JSONB,
    changed_fields TEXT[],

    -- Context
    user_id VARCHAR(50) NOT NULL,
    session_id VARCHAR(100),
    client_ip INET,
    user_agent TEXT,

    -- Timing
    timestamp TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,

    -- Compliance
    reason_code VARCHAR(20),
    business_justification TEXT
);

-- Indexes for audit trail
CREATE INDEX idx_audit_trail_table_record ON audit.audit_trail(table_name, record_id);
CREATE INDEX idx_audit_trail_user_id ON audit.audit_trail(user_id);
CREATE INDEX idx_audit_trail_timestamp ON audit.audit_trail(timestamp);

-- Functions for audit trail
-- The primary key column name is passed as a trigger argument so the same
-- function works for any audited table (customers, accounts, ...), instead
-- of assuming every table has a customer_id column.
CREATE OR REPLACE FUNCTION audit.create_audit_trigger_function()
RETURNS TRIGGER AS $$
DECLARE
    pk_column TEXT := TG_ARGV[0];
    record_uuid UUID;
BEGIN
    IF TG_OP = 'DELETE' THEN
        record_uuid := (to_jsonb(OLD) ->> pk_column)::UUID;
        INSERT INTO audit.audit_trail (
            table_name, record_id, operation, old_values, user_id
        ) VALUES (
            TG_TABLE_NAME::TEXT,
            record_uuid,
            TG_OP,
            to_jsonb(OLD),
            current_setting('app.current_user_id', true)
        );
        RETURN OLD;
    ELSIF TG_OP = 'UPDATE' THEN
        record_uuid := (to_jsonb(NEW) ->> pk_column)::UUID;
        INSERT INTO audit.audit_trail (
            table_name, record_id, operation, old_values, new_values, user_id
        ) VALUES (
            TG_TABLE_NAME::TEXT,
            record_uuid,
            TG_OP,
            to_jsonb(OLD),
            to_jsonb(NEW),
            current_setting('app.current_user_id', true)
        );
        RETURN NEW;
    ELSIF TG_OP = 'INSERT' THEN
        record_uuid := (to_jsonb(NEW) ->> pk_column)::UUID;
        INSERT INTO audit.audit_trail (
            table_name, record_id, operation, new_values, user_id
        ) VALUES (
            TG_TABLE_NAME::TEXT,
            record_uuid,
            TG_OP,
            to_jsonb(NEW),
            current_setting('app.current_user_id', true)
        );
        RETURN NEW;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- Create audit triggers
CREATE TRIGGER customers_audit_trigger
    AFTER INSERT OR UPDATE OR DELETE ON customers
    FOR EACH ROW EXECUTE FUNCTION audit.create_audit_trigger_function('customer_id');

CREATE TRIGGER accounts_audit_trigger
    AFTER INSERT OR UPDATE OR DELETE ON accounts
    FOR EACH ROW EXECUTE FUNCTION audit.create_audit_trigger_function('account_id');

-- Transaction update trigger to maintain account balances
CREATE OR REPLACE FUNCTION update_account_balance()
RETURNS TRIGGER AS $$
BEGIN
    IF TG_OP = 'INSERT' AND NEW.transaction_status = 'POSTED' THEN
        IF NEW.transaction_type = 'CREDIT' THEN
            UPDATE accounts
            SET current_balance = current_balance + NEW.amount,
                available_balance = available_balance + NEW.amount,
                last_activity_date = CURRENT_TIMESTAMP,
                updated_at = CURRENT_TIMESTAMP,
                version = version + 1
            WHERE account_id = NEW.account_id;
        ELSIF NEW.transaction_type = 'DEBIT' THEN
            UPDATE accounts
            SET current_balance = current_balance - NEW.amount,
                available_balance = available_balance - NEW.amount,
                last_activity_date = CURRENT_TIMESTAMP,
                updated_at = CURRENT_TIMESTAMP,
                version = version + 1
            WHERE account_id = NEW.account_id;
        END IF;
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER transaction_balance_trigger
    AFTER INSERT ON transactions
    FOR EACH ROW EXECUTE FUNCTION update_account_balance();
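-- Example: posting a credit of $2,500.00 (250000 cents); the trigger above
-- increments the account's current and available balances automatically.
-- The account UUID below is a hypothetical placeholder.
INSERT INTO transactions (
    transaction_reference, account_id, transaction_type,
    transaction_description, amount, transaction_status,
    posted_timestamp, created_by
) VALUES (
    'TXN-2025-000001',
    '00000000-0000-0000-0000-000000000001'::uuid,
    'CREDIT',
    'Payroll direct deposit',
    250000,
    'POSTED',
    CURRENT_TIMESTAMP,
    'batch_processor'
);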

-- Row Level Security for data protection
ALTER TABLE customers ENABLE ROW LEVEL SECURITY;
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
ALTER TABLE transactions ENABLE ROW LEVEL SECURITY;

-- Policies for customer service representatives
CREATE POLICY customer_service_policy ON customers
    FOR ALL TO customer_service_role
    USING (true);  -- Can see all customers

CREATE POLICY account_service_policy ON accounts
    FOR ALL TO customer_service_role
    USING (true);  -- Can see all accounts

-- Policies for customers (self-service)
CREATE POLICY customer_self_service ON customers
    FOR SELECT TO customer_role
    USING (customer_id = current_setting('app.current_customer_id')::UUID);

CREATE POLICY account_self_service ON accounts
    FOR SELECT TO customer_role
    USING (customer_id = current_setting('app.current_customer_id')::UUID);

CREATE POLICY transaction_self_service ON transactions
    FOR SELECT TO customer_role
    USING (account_id IN (
        SELECT account_id FROM accounts
        WHERE customer_id = current_setting('app.current_customer_id')::UUID
    ));

-- Grant permissions
GRANT USAGE ON SCHEMA core_banking TO customer_service_role, customer_role;
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA core_banking TO customer_service_role;
GRANT SELECT ON customers, accounts, transactions TO customer_role;
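-- The row-level security policies above depend on session settings supplied
-- by the application. A customer self-service connection might be
-- initialized like this (the UUID is a hypothetical placeholder):
SET ROLE customer_role;
SET app.current_customer_id = '00000000-0000-0000-0000-000000000001';
-- From here on, SELECTs against customers, accounts, and transactions
-- return only this customer's rows:
SELECT account_number, current_balance FROM accounts;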

Database Performance Optimization

Query Optimization Templates

Common FinTech Query Patterns:

sql
-- High-Performance Query Patterns for FinTech Applications

-- 1. Account Balance Inquiry (Most Common Query)
-- Optimized version with proper indexing
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT
    a.account_number,
    a.account_type,
    a.current_balance,
    a.available_balance,
    a.last_activity_date
FROM accounts a
WHERE a.account_number = $1
    AND a.account_status = 'ACTIVE';

-- Index to support this query
CREATE INDEX CONCURRENTLY idx_accounts_number_status
ON accounts (account_number, account_status)
WHERE account_status = 'ACTIVE';

-- 2. Transaction History with Pagination
-- Optimized for large transaction volumes
WITH transaction_page AS (
    SELECT
        t.transaction_id,
        t.transaction_reference,
        t.transaction_type,
        t.transaction_description,
        t.amount,
        t.transaction_date,
        t.posted_timestamp,
        t.merchant_name,
        ROW_NUMBER() OVER (ORDER BY t.posted_timestamp DESC) as rn
    FROM transactions t
    WHERE t.account_id = $1
        AND t.transaction_status = 'POSTED'
        AND t.transaction_date >= $2  -- Date range filter
        AND t.transaction_date <= $3
)
SELECT * FROM transaction_page
WHERE rn BETWEEN $4 AND $5;  -- Pagination

-- Supporting index
CREATE INDEX CONCURRENTLY idx_transactions_account_date_status
ON transactions (account_id, transaction_date DESC, transaction_status)
WHERE transaction_status = 'POSTED';
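-- Alternative: keyset (seek) pagination. The ROW_NUMBER approach above still
-- scans every skipped row on deep pages; seeking past the last row seen keeps
-- each page equally cheap. ($2, $3) is the cursor taken from the previous
-- page's final row (its posted_timestamp and transaction_id).
SELECT
    t.transaction_id,
    t.transaction_reference,
    t.transaction_type,
    t.amount,
    t.posted_timestamp
FROM transactions t
WHERE t.account_id = $1
    AND t.transaction_status = 'POSTED'
    AND (t.posted_timestamp, t.transaction_id) < ($2, $3)
ORDER BY t.posted_timestamp DESC, t.transaction_id DESC
LIMIT 50;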

-- 3. Daily Balance Calculation (for reporting)
-- Efficient end-of-day balance calculation
WITH daily_transactions AS (
    SELECT
        account_id,
        transaction_date,
        SUM(CASE WHEN transaction_type = 'CREDIT' THEN amount ELSE 0 END) as total_credits,
        SUM(CASE WHEN transaction_type = 'DEBIT' THEN amount ELSE 0 END) as total_debits,
        COUNT(*) as transaction_count
    FROM transactions
    WHERE transaction_status = 'POSTED'
        AND transaction_date = $1
    GROUP BY account_id, transaction_date
),
previous_balance AS (
    SELECT
        account_id,
        closing_balance as opening_balance
    FROM account_balance_snapshots
    WHERE snapshot_date = $1 - INTERVAL '1 day'
)
INSERT INTO account_balance_snapshots (
    account_id,
    snapshot_date,
    opening_balance,
    closing_balance,
    total_debits,
    total_credits,
    transaction_count
)
SELECT
    dt.account_id,
    dt.transaction_date,
    COALESCE(pb.opening_balance, 0),
    COALESCE(pb.opening_balance, 0) + dt.total_credits - dt.total_debits,
    dt.total_debits,
    dt.total_credits,
    dt.transaction_count
FROM daily_transactions dt
LEFT JOIN previous_balance pb ON dt.account_id = pb.account_id
ON CONFLICT (account_id, snapshot_date)
DO UPDATE SET
    closing_balance = EXCLUDED.closing_balance,
    total_debits = EXCLUDED.total_debits,
    total_credits = EXCLUDED.total_credits,
    transaction_count = EXCLUDED.transaction_count;

-- 4. AML/Fraud Detection Query
-- Finding suspicious transaction patterns
WITH suspicious_patterns AS (
    -- Large cash transactions
    SELECT DISTINCT t.account_id, 'LARGE_CASH' as pattern_type
    FROM transactions t
    WHERE t.transaction_date >= CURRENT_DATE - INTERVAL '30 days'
        AND t.amount >= 1000000  -- $10,000 in cents
        AND t.channel IN ('ATM', 'BRANCH')
        AND t.transaction_status = 'POSTED'

    UNION ALL

    -- Rapid succession transactions
    SELECT DISTINCT t1.account_id, 'RAPID_SUCCESSION' as pattern_type
    FROM transactions t1
    JOIN transactions t2 ON t1.account_id = t2.account_id
        AND t2.posted_timestamp > t1.posted_timestamp
        AND t2.posted_timestamp <= t1.posted_timestamp + INTERVAL '1 hour'
    WHERE t1.transaction_date >= CURRENT_DATE - INTERVAL '7 days'
        AND t1.transaction_status = 'POSTED'
        AND t2.transaction_status = 'POSTED'
    GROUP BY t1.account_id
    HAVING COUNT(*) >= 5  -- 5 or more transactions in 1 hour

    UNION ALL

    -- Unusual geographic activity
    SELECT DISTINCT t.account_id, 'GEOGRAPHIC_ANOMALY' as pattern_type
    FROM transactions t
    WHERE t.transaction_date >= CURRENT_DATE - INTERVAL '1 day'
        AND t.location_country != 'USA'
        AND t.transaction_status = 'POSTED'
        AND EXISTS (
            SELECT 1 FROM transactions t2
            WHERE t2.account_id = t.account_id
                AND t2.transaction_date >= CURRENT_DATE - INTERVAL '1 day'
                AND t2.location_country = 'USA'
                AND t2.posted_timestamp < t.posted_timestamp - INTERVAL '2 hours'
        )
)
SELECT
    sp.account_id,
    c.customer_number,
    a.account_number,
    ARRAY_AGG(DISTINCT sp.pattern_type) as suspicious_patterns,
    COUNT(DISTINCT sp.pattern_type) as pattern_count
FROM suspicious_patterns sp
JOIN accounts a ON sp.account_id = a.account_id
JOIN customers c ON a.customer_id = c.customer_id
WHERE NOT EXISTS (
    SELECT 1 FROM transactions t
    WHERE t.account_id = sp.account_id
        AND t.aml_reviewed = TRUE
        AND t.transaction_date >= CURRENT_DATE - INTERVAL '30 days'
)
GROUP BY sp.account_id, c.customer_number, a.account_number
ORDER BY pattern_count DESC;

-- Supporting indexes for AML queries
CREATE INDEX CONCURRENTLY idx_transactions_amount_channel
ON transactions (transaction_date, amount, channel)
WHERE transaction_status = 'POSTED';

CREATE INDEX CONCURRENTLY idx_transactions_location_country
ON transactions (account_id, location_country, posted_timestamp)
WHERE transaction_status = 'POSTED';

-- 5. Customer 360 View Query
-- Comprehensive customer information for service representatives
WITH customer_summary AS (
    SELECT
        c.customer_id,
        c.customer_number,
        -- Decrypt PII for authorized users only
        CASE WHEN current_setting('app.user_role') = 'customer_service'
             THEN pgp_sym_decrypt(c.first_name_encrypted, current_setting('app.encryption_key'))
             ELSE '***' END as first_name,
        CASE WHEN current_setting('app.user_role') = 'customer_service'
             THEN pgp_sym_decrypt(c.last_name_encrypted, current_setting('app.encryption_key'))
             ELSE '***' END as last_name,
        c.customer_type,
        c.customer_status,
        c.kyc_status,
        c.risk_rating,
        c.created_at as customer_since
    FROM customers c
    WHERE c.customer_number = $1
),
account_summary AS (
    SELECT
        a.customer_id,
        COUNT(*) as total_accounts,
        COUNT(*) FILTER (WHERE a.account_status = 'ACTIVE') as active_accounts,
        SUM(a.current_balance) as total_balance,
        MAX(a.last_activity_date) as last_activity
    FROM accounts a
    WHERE a.customer_id = (SELECT customer_id FROM customer_summary)
    GROUP BY a.customer_id
),
recent_transactions AS (
    SELECT
        COUNT(*) as transaction_count_30d,
        COUNT(*) FILTER (WHERE t.aml_flagged = TRUE) as aml_flagged_count,
        COUNT(*) FILTER (WHERE t.fraud_flagged = TRUE) as fraud_flagged_count
    FROM transactions t
    JOIN accounts a ON t.account_id = a.account_id
    WHERE a.customer_id = (SELECT customer_id FROM customer_summary)
        AND t.transaction_date >= CURRENT_DATE - INTERVAL '30 days'
        AND t.transaction_status = 'POSTED'
)
SELECT
    cs.*,
    COALESCE(as_.total_accounts, 0) as total_accounts,
    COALESCE(as_.active_accounts, 0) as active_accounts,
    COALESCE(as_.total_balance, 0) as total_balance,
    as_.last_activity,
    COALESCE(rt.transaction_count_30d, 0) as recent_transaction_count,
    COALESCE(rt.aml_flagged_count, 0) as aml_flags,
    COALESCE(rt.fraud_flagged_count, 0) as fraud_flags
FROM customer_summary cs
LEFT JOIN account_summary as_ ON cs.customer_id = as_.customer_id
CROSS JOIN recent_transactions rt;

-- 6. Compliance Reporting Query
-- Monthly BSA/AML reporting
SELECT
    DATE_TRUNC('month', t.transaction_date) as report_month,
    COUNT(*) FILTER (WHERE t.amount >= 1000000) as large_transactions,  -- $10K+
    COUNT(DISTINCT t.account_id) FILTER (WHERE t.amount >= 1000000) as accounts_with_large_txns,
    COUNT(*) FILTER (WHERE t.aml_flagged = TRUE) as aml_flagged_transactions,
    COUNT(*) FILTER (WHERE t.aml_flagged = TRUE AND t.aml_reviewed = FALSE) as pending_aml_review,
    SUM(t.amount) FILTER (WHERE t.amount >= 1000000) as total_large_transaction_amount,
    AVG(t.amount) FILTER (WHERE t.amount >= 1000000) as avg_large_transaction_amount
FROM transactions t
WHERE t.transaction_date >= DATE_TRUNC('year', CURRENT_DATE)
    AND t.transaction_status = 'POSTED'
GROUP BY DATE_TRUNC('month', t.transaction_date)
ORDER BY report_month;

-- Performance monitoring query
-- Requires the pg_stat_statements extension
-- (shared_preload_libraries = 'pg_stat_statements', then CREATE EXTENSION pg_stat_statements)
SELECT
    query,
    calls,
    total_exec_time,   -- named total_time on PostgreSQL 12 and earlier
    mean_exec_time,    -- named mean_time on PostgreSQL 12 and earlier
    rows,
    100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent
FROM pg_stat_statements
WHERE mean_exec_time > 100  -- queries averaging more than 100 ms
ORDER BY mean_exec_time DESC
LIMIT 20;

Application Framework Resources

Spring Boot FinTech Application Template

Production-Ready Spring Boot Configuration

Application Configuration:

java
// Spring Boot FinTech Application Template
// Production-ready configuration for financial services

package com.fintech.banking;

import java.util.Map;

import org.springframework.boot.Banner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@SpringBootApplication
@EnableCaching
@EnableAsync
@EnableScheduling
@EnableTransactionManagement
@EnableGlobalMethodSecurity(prePostEnabled = true, securedEnabled = true)
public class FintechBankingApplication {
    public static void main(String[] args) {
        // Set system properties for security
        System.setProperty("spring.jmx.enabled", "false");
        System.setProperty("endpoints.jmx.enabled", "false");

        SpringApplication application = new SpringApplication(FintechBankingApplication.class);

        // Disable banner for production
        application.setBannerMode(Banner.Mode.OFF);

        // Set default properties
        application.setDefaultProperties(Map.of(
            "spring.profiles.active", "production",
            "management.endpoints.web.exposure.include", "health,metrics,prometheus",
            "management.endpoint.health.show-details", "when-authorized"
        ));

        application.run(args);
    }
}

// Security Configuration
@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SecurityConfig {

    @Autowired
    private JwtAuthenticationEntryPoint jwtAuthenticationEntryPoint;

    @Autowired
    private JwtRequestFilter jwtRequestFilter;

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder(12); // Higher cost for financial applications
    }

    @Bean
    public AuthenticationManager authenticationManager(
            AuthenticationConfiguration authConfig) throws Exception {
        return authConfig.getAuthenticationManager();
    }

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.csrf().disable()
            .authorizeHttpRequests(authz -> authz
                .requestMatchers("/api/auth/**").permitAll()
                .requestMatchers("/actuator/health").permitAll()
                .requestMatchers("/api/public/**").permitAll()
                .requestMatchers(HttpMethod.GET, "/api/accounts/**").hasRole("USER")
                .requestMatchers(HttpMethod.POST, "/api/transactions/**").hasRole("USER")
                .requestMatchers("/api/admin/**").hasRole("ADMIN")
                .anyRequest().authenticated()
            )
            .exceptionHandling().authenticationEntryPoint(jwtAuthenticationEntryPoint)
            .and()
            .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS)
            .and()
            .headers(headers -> headers
                .frameOptions().deny()
                .contentTypeOptions().and()
                .xssProtection().and()
                .referrerPolicy(ReferrerPolicy.STRICT_ORIGIN_WHEN_CROSS_ORIGIN)
                .httpStrictTransportSecurity(hstsConfig -> hstsConfig
                    .maxAgeInSeconds(31536000)
                    .includeSubDomains(true)
                    .preload(true)
                )
            );

        http.addFilterBefore(jwtRequestFilter, UsernamePasswordAuthenticationFilter.class);

        return http.build();
    }
}

// Database Configuration
@Configuration
@EnableJpaRepositories(basePackages = "com.fintech.banking.repository")
@EnableTransactionManagement
public class DatabaseConfig {

    @Bean
    @Primary
    @ConfigurationProperties("spring.datasource.primary")
    public DataSource primaryDataSource() {
        HikariConfig config = new HikariConfig();
        config.setMaximumPoolSize(20);
        config.setMinimumIdle(5);
        config.setConnectionTimeout(30000);
        config.setIdleTimeout(600000);
        config.setMaxLifetime(1800000);
        config.setLeakDetectionThreshold(60000);

        // Security settings
        config.addDataSourceProperty("ssl", "true");
        config.addDataSourceProperty("sslmode", "require");
        config.addDataSourceProperty("prepareThreshold", "0");

        return new HikariDataSource(config);
    }

    @Bean
    @Primary
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(
            EntityManagerFactoryBuilder builder, @Qualifier("primaryDataSource") DataSource dataSource) {

        return builder
                .dataSource(dataSource)
                .packages("com.fintech.banking.entity")
                .persistenceUnit("primary")
                .properties(hibernateProperties())
                .build();
    }

    @Bean
    @Primary
    public PlatformTransactionManager transactionManager(
            @Qualifier("entityManagerFactory") EntityManagerFactory entityManagerFactory) {
        return new JpaTransactionManager(entityManagerFactory);
    }

    private Map<String, Object> hibernateProperties() {
        Map<String, Object> properties = new HashMap<>();
        properties.put("hibernate.dialect", "org.hibernate.dialect.PostgreSQLDialect");
        properties.put("hibernate.hbm2ddl.auto", "validate");
        properties.put("hibernate.show_sql", "false");
        properties.put("hibernate.format_sql", "false");
        properties.put("hibernate.jdbc.batch_size", "25");
        properties.put("hibernate.order_inserts", "true");
        properties.put("hibernate.order_updates", "true");
        properties.put("hibernate.jdbc.batch_versioned_data", "true");
        properties.put("hibernate.connection.provider_disables_autocommit", "true");

        return properties;
    }
}

// Audit Configuration
@Configuration
@EnableJpaAuditing(auditorAwareRef = "auditorProvider")
public class AuditConfig {

    @Bean
    public AuditorAware<String> auditorProvider() {
        return new SpringSecurityAuditorAware();
    }
}

public class SpringSecurityAuditorAware implements AuditorAware<String> {

    @Override
    public Optional<String> getCurrentAuditor() {
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();

        if (authentication == null || !authentication.isAuthenticated()
            || authentication instanceof AnonymousAuthenticationToken) {
            return Optional.of("SYSTEM");
        }

        return Optional.ofNullable(authentication.getName());
    }
}

// Cache Configuration
@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        RedisCacheManager.Builder builder = RedisCacheManager
                .RedisCacheManagerBuilder
                .fromConnectionFactory(redisConnectionFactory())
                .cacheDefaults(cacheConfiguration());

        return builder.build();
    }

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .commandTimeout(Duration.ofSeconds(2))
                .shutdownTimeout(Duration.ZERO)
                .build();

        return new LettuceConnectionFactory(new RedisStandaloneConfiguration(), clientConfig);
    }

    private RedisCacheConfiguration cacheConfiguration() {
        return RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(10))
                .serializeKeysWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new StringRedisSerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new GenericJackson2JsonRedisSerializer()));
    }
}

// Monitoring and Metrics Configuration
@Configuration
public class MetricsConfig {

    @Bean
    public TimedAspect timedAspect(MeterRegistry registry) {
        return new TimedAspect(registry);
    }

    @Bean
    public CountedAspect countedAspect(MeterRegistry registry) {
        return new CountedAspect(registry);
    }

    @Bean
    @ConditionalOnMissingBean
    public MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
        return registry -> registry.config().commonTags(
                "application", "fintech-banking",
                "environment", getEnvironment()
        );
    }

    private String getEnvironment() {
        return Optional.ofNullable(System.getenv("SPRING_PROFILES_ACTIVE"))
                .orElse("unknown");
    }
}

// Error Handling Configuration
@ControllerAdvice
@Slf4j
public class GlobalExceptionHandler {

    @ExceptionHandler(ValidationException.class)
    @ResponseStatus(HttpStatus.BAD_REQUEST)
    public ResponseEntity<ErrorResponse> handleValidationException(ValidationException ex) {
        log.warn("Validation error: {}", ex.getMessage());

        ErrorResponse errorResponse = ErrorResponse.builder()
                .timestamp(Instant.now())
                .status(HttpStatus.BAD_REQUEST.value())
                .error("Validation Failed")
                .message(ex.getMessage())
                .path(getCurrentPath())
                .build();

        return ResponseEntity.badRequest().body(errorResponse);
    }

    @ExceptionHandler(InsufficientFundsException.class)
    @ResponseStatus(HttpStatus.UNPROCESSABLE_ENTITY)
    public ResponseEntity<ErrorResponse> handleInsufficientFundsException(InsufficientFundsException ex) {
        log.warn("Insufficient funds: {}", ex.getMessage());

        ErrorResponse errorResponse = ErrorResponse.builder()
                .timestamp(Instant.now())
                .status(HttpStatus.UNPROCESSABLE_ENTITY.value())
                .error("Insufficient Funds")
                .message("Transaction cannot be completed due to insufficient funds")
                .path(getCurrentPath())
                .build();

        return ResponseEntity.unprocessableEntity().body(errorResponse);
    }

    @ExceptionHandler(Exception.class)
    @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
    public ResponseEntity<ErrorResponse> handleGenericException(Exception ex) {
        log.error("Unexpected error occurred", ex);

        ErrorResponse errorResponse = ErrorResponse.builder()
                .timestamp(Instant.now())
                .status(HttpStatus.INTERNAL_SERVER_ERROR.value())
                .error("Internal Server Error")
                .message("An unexpected error occurred")
                .path(getCurrentPath())
                .build();

        return ResponseEntity.internalServerError().body(errorResponse);
    }

    private String getCurrentPath() {
        RequestAttributes requestAttributes = RequestContextHolder.getRequestAttributes();
        if (requestAttributes instanceof ServletRequestAttributes) {
            HttpServletRequest request = ((ServletRequestAttributes) requestAttributes).getRequest();
            return request.getRequestURI();
        }
        return "unknown";
    }
}
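
The ErrorResponse type the handler builds is not defined anywhere in this template. A minimal self-contained sketch is below; the field names follow the builder calls above, and the hand-rolled builder is an assumption that avoids a Lombok dependency:

```java
import java.time.Instant;

// Minimal sketch of the ErrorResponse payload used by GlobalExceptionHandler.
// Fields mirror the builder calls in the handler methods above.
public class ErrorResponse {
    private final Instant timestamp;
    private final int status;
    private final String error;
    private final String message;
    private final String path;

    private ErrorResponse(Builder b) {
        this.timestamp = b.timestamp;
        this.status = b.status;
        this.error = b.error;
        this.message = b.message;
        this.path = b.path;
    }

    public static Builder builder() { return new Builder(); }

    public Instant getTimestamp() { return timestamp; }
    public int getStatus() { return status; }
    public String getError() { return error; }
    public String getMessage() { return message; }
    public String getPath() { return path; }

    public static class Builder {
        private Instant timestamp;
        private int status;
        private String error;
        private String message;
        private String path;

        public Builder timestamp(Instant t) { this.timestamp = t; return this; }
        public Builder status(int s) { this.status = s; return this; }
        public Builder error(String e) { this.error = e; return this; }
        public Builder message(String m) { this.message = m; return this; }
        public Builder path(String p) { this.path = p; return this; }
        public ErrorResponse build() { return new ErrorResponse(this); }
    }
}
```

In a real project this class would also need Jackson-friendly serialization (getters suffice for the default ObjectMapper configuration).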

// Application Properties Template

application.yml Configuration:

(The full application.yml template runs to roughly 290 lines across Spring, logging, and FinTech-specific sections and is not reproduced inline here.)
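
A condensed sketch of the shape such an application.yml takes is below. The property names under spring.security.jwt match the @Value injections in the JwtTokenService shown in the next section; the hosts, pool settings, and TTLs are illustrative values, not recommendations:

```yaml
# Condensed, illustrative application.yml for the template above.
spring:
  profiles:
    active: production
  datasource:
    primary:
      url: jdbc:postgresql://db-primary:5432/banking?ssl=true&sslmode=require
      username: ${DB_USERNAME}
      password: ${DB_PASSWORD}
  jpa:
    hibernate:
      ddl-auto: validate
  data:
    redis:
      host: redis
      port: 6379
  security:
    jwt:
      secret: ${JWT_SECRET}
      expiration: 900            # seconds (15 minutes)
      refresh-expiration: 86400  # seconds (24 hours)

management:
  endpoints:
    web:
      exposure:
        include: health,metrics,prometheus
  endpoint:
    health:
      show-details: when-authorized

logging:
  level:
    com.fintech.banking: INFO
    org.hibernate.SQL: WARN
```

Secrets should always be injected from the environment or a secrets manager, never committed in the YAML itself.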

API Security Implementation

JWT Authentication Service

java
// JWT Authentication Implementation for FinTech
// Production-ready with security best practices

@Service
@Slf4j
public class JwtTokenService {

    private final String jwtSecret;
    private final int jwtExpiration;
    private final int jwtRefreshExpiration;
    private final RedisTemplate<String, String> redisTemplate;

    public JwtTokenService(
            @Value("${spring.security.jwt.secret}") String jwtSecret,
            @Value("${spring.security.jwt.expiration}") int jwtExpiration,
            @Value("${spring.security.jwt.refresh-expiration}") int jwtRefreshExpiration,
            RedisTemplate<String, String> redisTemplate) {
        this.jwtSecret = jwtSecret;
        this.jwtExpiration = jwtExpiration;
        this.jwtRefreshExpiration = jwtRefreshExpiration;
        this.redisTemplate = redisTemplate;
    }

    public String generateToken(UserDetails userDetails, String sessionId, String clientIp) {
        Map<String, Object> claims = new HashMap<>();
        claims.put("sessionId", sessionId);
        claims.put("clientIp", clientIp);
        claims.put("roles", userDetails.getAuthorities().stream()
                .map(GrantedAuthority::getAuthority)
                .collect(Collectors.toList()));
        claims.put("tokenType", "ACCESS");

        String token = createToken(claims, userDetails.getUsername(), jwtExpiration);

        // Store token in Redis with expiration
        redisTemplate.opsForValue().set(
                "token:" + extractJti(token),
                userDetails.getUsername(),
                Duration.ofSeconds(jwtExpiration)
        );

        // Store session mapping
        redisTemplate.opsForValue().set(
                "session:" + sessionId,
                userDetails.getUsername(),
                Duration.ofSeconds(jwtExpiration)
        );

        log.info("JWT token generated for user: {} from IP: {}",
                userDetails.getUsername(), clientIp);

        return token;
    }

    public String generateRefreshToken(UserDetails userDetails, String sessionId) {
        Map<String, Object> claims = new HashMap<>();
        claims.put("sessionId", sessionId);
        claims.put("tokenType", "REFRESH");

        String refreshToken = createToken(claims, userDetails.getUsername(), jwtRefreshExpiration);

        // Store refresh token in Redis
        redisTemplate.opsForValue().set(
                "refresh:" + extractJti(refreshToken),
                userDetails.getUsername(),
                Duration.ofSeconds(jwtRefreshExpiration)
        );

        return refreshToken;
    }

    private String createToken(Map<String, Object> claims, String subject, int expiration) {
        Date now = new Date();
        Date expiryDate = new Date(now.getTime() + expiration * 1000L);

        return Jwts.builder()
                .setClaims(claims)
                .setSubject(subject)
                .setIssuedAt(now)
                .setExpiration(expiryDate)
                .setId(UUID.randomUUID().toString()) // JTI for token revocation
                .setIssuer("fintech-banking-app")
                .setAudience("fintech-banking-users")
                .signWith(SignatureAlgorithm.HS512, jwtSecret)
                .compact();
    }

    public Boolean validateToken(String token, UserDetails userDetails, String clientIp) {
        try {
            // Check if token is blacklisted
            if (isTokenBlacklisted(token)) {
                log.warn("Attempt to use blacklisted token by user: {}", userDetails.getUsername());
                return false;
            }

            final String username = getUsernameFromToken(token);
            final String tokenClientIp = getClientIpFromToken(token);
            final String sessionId = getSessionIdFromToken(token);

            // Validate username
            if (!username.equals(userDetails.getUsername())) {
                log.warn("Token username mismatch: expected {}, got {}",
                        userDetails.getUsername(), username);
                return false;
            }

            // Validate client IP (optional, based on security requirements)
            if (tokenClientIp != null && !tokenClientIp.equals(clientIp)) {
                log.warn("Token IP mismatch for user {}: expected {}, got {}",
                        username, tokenClientIp, clientIp);
                // Consider this a security warning but don't reject for mobile users
                // return false;
            }

            // Check if session is still valid
            String sessionUser = redisTemplate.opsForValue().get("session:" + sessionId);
            if (sessionUser == null || !sessionUser.equals(username)) {
                log.warn("Invalid session for user: {}", username);
                return false;
            }

            // Validate token expiration
            return !isTokenExpired(token);

        } catch (JwtException e) {
            log.error("JWT validation error for user {}: {}", userDetails.getUsername(), e.getMessage());
            return false;
        }
    }

    public String getUsernameFromToken(String token) {
        return getClaimFromToken(token, Claims::getSubject);
    }

    public Date getExpirationDateFromToken(String token) {
        return getClaimFromToken(token, Claims::getExpiration);
    }

    public String getSessionIdFromToken(String token) {
        return getClaimFromToken(token, claims -> claims.get("sessionId", String.class));
    }

    public String getClientIpFromToken(String token) {
        return getClaimFromToken(token, claims -> claims.get("clientIp", String.class));
    }

    public String extractJti(String token) {
        return getClaimFromToken(token, Claims::getId);
    }

    @SuppressWarnings("unchecked")
    public List<String> getRolesFromToken(String token) {
        return getClaimFromToken(token, claims -> claims.get("roles", List.class));
    }

    public <T> T getClaimFromToken(String token, Function<Claims, T> claimsResolver) {
        final Claims claims = getAllClaimsFromToken(token);
        return claimsResolver.apply(claims);
    }

    private Claims getAllClaimsFromToken(String token) {
        return Jwts.parser()
                .setSigningKey(jwtSecret)
                .parseClaimsJws(token)
                .getBody();
    }

    private Boolean isTokenExpired(String token) {
        final Date expiration = getExpirationDateFromToken(token);
        return expiration.before(new Date());
    }

    public void revokeToken(String token) {
        String jti = extractJti(token);
        String username = getUsernameFromToken(token);

        // Add to blacklist
        Date expiration = getExpirationDateFromToken(token);
        long ttl = (expiration.getTime() - System.currentTimeMillis()) / 1000;

        if (ttl > 0) {
            redisTemplate.opsForValue().set(
                    "blacklist:" + jti,
                    username,
                    Duration.ofSeconds(ttl)
            );
        }

        // Remove from active tokens
        redisTemplate.delete("token:" + jti);

        log.info("Token revoked for user: {}", username);
    }

    public void revokeAllUserTokens(String username) {
        // This would require storing user-token mapping
        // For now, we'll just log the action
        log.info("All tokens revoked for user: {}", username);

        // In a real implementation, you'd iterate through user's active tokens
        // and revoke them individually
    }

    private boolean isTokenBlacklisted(String token) {
        String jti = extractJti(token);
        return redisTemplate.hasKey("blacklist:" + jti);
    }

    public String refreshAccessToken(String refreshToken) {
        try {
            if (isTokenExpired(refreshToken)) {
                throw new JwtException("Refresh token expired");
            }
        } catch (ExpiredJwtException e) {
            // Parsing an expired token throws before isTokenExpired() can return,
            // so normalize that case here as well
            throw new JwtException("Refresh token expired");
        }

        String jti = extractJti(refreshToken);
        String username = redisTemplate.opsForValue().get("refresh:" + jti);

        if (username == null) {
            throw new JwtException("Invalid refresh token");
        }

        // Load user details and generate new access token
        UserDetails userDetails = loadUserDetailsByUsername(username);
        String sessionId = getSessionIdFromToken(refreshToken);
        String clientIp = getClientIpFromToken(refreshToken);

        return generateToken(userDetails, sessionId, clientIp);
    }

    private UserDetails loadUserDetailsByUsername(String username) {
        // This should be injected as a dependency
        // For example purposes, we'll create a placeholder
        throw new UnsupportedOperationException("UserDetailsService should be injected");
    }
}
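
The revokeAllUserTokens method above is left as a stub because it needs a user-to-token index maintained at issue time. A self-contained sketch of that bookkeeping follows; the in-memory maps are illustrative stand-ins for what would be Redis structures in production (a set of JTIs per user, plus a blacklist key per JTI):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative bookkeeping for bulk token revocation: track active JTIs per
// user so revokeAllUserTokens can blacklist each one in a single pass.
public class TokenIndex {
    private final Map<String, Set<String>> userJtis = new ConcurrentHashMap<>();
    private final Set<String> blacklist = ConcurrentHashMap.newKeySet();

    // Call when a token is issued (from generateToken / generateRefreshToken)
    public void recordIssued(String username, String jti) {
        userJtis.computeIfAbsent(username, u -> ConcurrentHashMap.newKeySet()).add(jti);
    }

    // Blacklist every JTI issued to this user
    public void revokeAll(String username) {
        Set<String> jtis = userJtis.remove(username);
        if (jtis != null) {
            blacklist.addAll(jtis);
        }
    }

    // Checked by the validation path (analogous to isTokenBlacklisted above)
    public boolean isRevoked(String jti) {
        return blacklist.contains(jti);
    }
}
```

With Redis, recordIssued maps to SADD on a per-user key and revokeAll to SMEMBERS followed by writing blacklist entries with a TTL matching each token's remaining lifetime.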

// JWT Request Filter
@Component
@Slf4j
public class JwtRequestFilter extends OncePerRequestFilter {

    private final UserDetailsService userDetailsService;
    private final JwtTokenService jwtTokenService;

    public JwtRequestFilter(UserDetailsService userDetailsService, JwtTokenService jwtTokenService) {
        this.userDetailsService = userDetailsService;
        this.jwtTokenService = jwtTokenService;
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain chain) throws ServletException, IOException {

        final String requestTokenHeader = request.getHeader("Authorization");
        final String clientIp = getClientIpAddress(request);

        String username = null;
        String jwtToken = null;

        // JWT Token is in the form "Bearer token"
        if (requestTokenHeader != null && requestTokenHeader.startsWith("Bearer ")) {
            jwtToken = requestTokenHeader.substring(7);
            try {
                username = jwtTokenService.getUsernameFromToken(jwtToken);
            } catch (IllegalArgumentException e) {
                log.error("Unable to get JWT Token");
            } catch (ExpiredJwtException e) {
                log.warn("JWT Token has expired");
            } catch (JwtException e) {
                log.error("JWT Token validation error: {}", e.getMessage());
            }
        } else {
            log.debug("JWT Token does not begin with Bearer String");
        }

        // Validate token and set authentication
        if (username != null && SecurityContextHolder.getContext().getAuthentication() == null) {
            UserDetails userDetails = this.userDetailsService.loadUserByUsername(username);

            if (jwtTokenService.validateToken(jwtToken, userDetails, clientIp)) {
                UsernamePasswordAuthenticationToken authToken =
                        new UsernamePasswordAuthenticationToken(
                                userDetails, null, userDetails.getAuthorities());
                authToken.setDetails(new WebAuthenticationDetailsSource().buildDetails(request));
                SecurityContextHolder.getContext().setAuthentication(authToken);

                // Set user context for audit logging
                MDC.put("userId", username);
                MDC.put("sessionId", jwtTokenService.getSessionIdFromToken(jwtToken));
                MDC.put("clientIp", clientIp);
            }
        }

        chain.doFilter(request, response);
    }

    private String getClientIpAddress(HttpServletRequest request) {
        String xForwardedForHeader = request.getHeader("X-Forwarded-For");
        if (xForwardedForHeader != null && !xForwardedForHeader.isEmpty()) {
            return xForwardedForHeader.split(",")[0].trim();
        }

        String xRealIpHeader = request.getHeader("X-Real-IP");
        if (xRealIpHeader != null && !xRealIpHeader.isEmpty()) {
            return xRealIpHeader;
        }

        return request.getRemoteAddr();
    }

    @Override
    protected boolean shouldNotFilter(HttpServletRequest request) throws ServletException {
        String path = request.getServletPath();
        return path.startsWith("/api/auth/") ||
               path.startsWith("/actuator/health") ||
               path.startsWith("/api/public/");
    }
}

Monitoring and Observability

Comprehensive Monitoring Stack

Prometheus + Grafana Configuration

Prometheus Configuration:

YAML Configuration

# prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    cluster: fintech-banking
    environment: production

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']

rule_files:
  - "fintech_rules.yml"
  - "sla_rules.yml"

scrape_configs:
  - job_name: 'fintech-banking-app'
    static_configs:
      - targets: ['banking-app:8080']
    metrics_path: /actuator/prometheus
    scrape_interval: 10s
    scrape_timeout: 5s
    params:
      format: ['prometheus']

  - job_name: 'postgres-exporter'
    static_configs:
      - targets: ['postgres-exporter:9187']
    scrape_interval: 30s

  - job_name: 'redis-exporter'
    static_configs:
      - targets: ['redis-exporter:9121']
    scrape_interval: 30s

  - job_name: 'kubernetes-apiservers'
    kubernetes_sd_configs:
      - role: endpoints
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https

  - job_name: 'node-exporter'
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      - source_labels: [__meta_kubernetes_endpoints_name]
        action: keep
        regex: node-exporter

  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

FinTech-Specific Alerting Rules:

(The full fintech_rules.yml alert-rule template runs to roughly 190 lines of alert groups, rules, and annotations and is not reproduced inline here.)
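
A minimal sketch of the kind of rules such a file contains is below. The alert names, expressions, and thresholds are illustrative, and the metric names assume the Spring Boot application exports Micrometer metrics at /actuator/prometheus as configured above:

```yaml
# Illustrative excerpt of fintech_rules.yml
groups:
  - name: fintech-availability
    rules:
      - alert: BankingAppDown
        expr: up{job="fintech-banking-app"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Banking application instance down"
          description: "{{ $labels.instance }} has been unreachable for 1 minute."

      - alert: HighTransactionLatency
        expr: histogram_quantile(0.99, rate(http_server_requests_seconds_bucket{uri=~"/api/transactions.*"}[5m])) > 0.5
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "p99 transaction API latency above 500 ms"
```

Production rule sets typically add groups for error rates, database connection-pool saturation, and pending AML-review backlogs, each with severities mapped to the institution's incident process.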

Conclusion

This technical resources chapter provides production-ready templates, configurations, and code samples that FinTech consulting teams can immediately use in their implementations. All resources have been designed with security, compliance, performance, and maintainability as primary considerations.

The resources cover:

  1. Infrastructure: Cloud platform comparisons, Terraform templates, Kubernetes manifests
  2. Database: Schema designs, performance optimization queries, audit implementations
  3. Application: Spring Boot configurations, security implementations, monitoring setup
  4. Monitoring: Comprehensive observability stack with FinTech-specific metrics and alerts

These resources should be customized for specific client requirements while maintaining the security and compliance standards essential for financial services applications.

Remember that these templates represent starting points that must be adapted to specific regulatory requirements, security policies, and business needs of each financial institution. Always conduct thorough security reviews and compliance validation before deploying to production environments.