
A Complete Guide to PostgreSQL Dynamic Tracing

Overview

PostgreSQL dynamic tracing is a powerful monitoring and debugging facility that lets developers and database administrators observe specific points of code execution in real time while the server is running. Through probes (trace points) built into the source code, we can monitor key database activity such as transaction processing, query execution, lock contention, and buffer operations.

INFO

Dynamic tracing originated at Sun Microsystems as the DTrace facility on Solaris. DTrace is currently available on Solaris, macOS, FreeBSD, NetBSD, and Oracle Linux; on other Linux systems, SystemTap provides equivalent functionality.

Build Configuration and Environment Setup

Enabling Dynamic Tracing Support

PostgreSQL does not compile in the dynamic tracing probes by default; they must be enabled explicitly at build time:

bash
# Configure the build with DTrace support enabled
./configure --enable-dtrace --prefix=/usr/local/pgsql

# Compile and install
make && make install

Verifying Tracing Support

bash
# Check whether this PostgreSQL build has dynamic tracing enabled
pg_config --configure | grep -- --enable-dtrace

# List the available probes
dtrace -l -n 'postgresql*:::*' | head -10

Sample output:

   ID   PROVIDER            MODULE                          FUNCTION NAME
18705 postgresql49878     postgres     StartTransactionCommand transaction-start
18755 postgresql49877     postgres     ExecutorStart           query-execute-start
18805 postgresql49876     postgres     ProcessQuery            query-start
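The NAME column (the probe name) is what monitoring scripts match on. A small awk pass over a saved listing (the hypothetical sample rows above, pasted into a variable) pulls those names out:

```shell
# Extract and sort the probe-name column from a saved `dtrace -l` listing
probe_list='   ID   PROVIDER            MODULE                          FUNCTION NAME
18705 postgresql49878     postgres     StartTransactionCommand transaction-start
18755 postgresql49877     postgres     ExecutorStart           query-execute-start
18805 postgresql49876     postgres     ProcessQuery            query-start'

# Skip the header row, keep the last field (the probe name)
names=$(printf '%s\n' "$probe_list" | awk 'NR > 1 { print $NF }' | sort)
printf '%s\n' "$names"
```

On a live system the listing would come from `dtrace -l -n 'postgresql*:::*'` instead of a pasted sample.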

Built-in Probes in Detail

PostgreSQL ships with a rich set of built-in probes covering every key stage of database operation:

Transaction Lifecycle Probes

| Probe | Argument | Fired when | Why it matters |
|---|---|---|---|
| transaction-start | LocalTransactionId | A new transaction begins | Gauge transaction concurrency; spot long transactions |
| transaction-commit | LocalTransactionId | A transaction commits | Count and rate successful transactions |
| transaction-abort | LocalTransactionId | A transaction rolls back | Track the failure rate; investigate anomalies |

Query Processing Probes


Buffer Management Probes

| Category | Probes | What it measures | Why it matters |
|---|---|---|---|
| Reads | buffer-read-start/done | Page read latency | Pinpoint I/O bottlenecks |
| Writes | buffer-flush-start/done | Dirty-page flush performance | Tune the write strategy |
| Extends | buffer-extend-start/done | Relation extension | Forecast storage growth |

Practical Application Scenarios

Scenario 1: Transaction Performance Monitoring

Business context: An e-commerce platform sees slow order processing during a sales event and needs to analyze transaction performance.

bash
#!/usr/sbin/dtrace -qs
/* File: transaction_monitor.d */

postgresql$1:::transaction-start
{
    started++;
    @start["Transaction Started"] = count();
    self->ts = timestamp;
    self->txn_id = arg0;
}

/* Flag long-running transactions; D has no if statement, so use a predicate */
postgresql$1:::transaction-commit
/self->ts && (timestamp - self->ts) / 1000000 > 1000/
{
    printf("Long transaction detected: TXN_ID=%d, Duration=%d ms\n",
           self->txn_id, (timestamp - self->ts) / 1000000);
}

postgresql$1:::transaction-commit
/self->ts/
{
    @commit["Transaction Committed"] = count();
    @time["Avg Commit Time (ms)"] = avg((timestamp - self->ts) / 1000000);
    @max_time["Max Commit Time (ms)"] = max((timestamp - self->ts) / 1000000);

    self->ts = 0;
    self->txn_id = 0;
}

postgresql$1:::transaction-abort
{
    aborted++;
    @abort["Transaction Aborted"] = count();
}

postgresql$1:::transaction-abort
/self->ts/
{
    printf("Transaction aborted: TXN_ID=%d, Duration=%d ms\n",
           self->txn_id, (timestamp - self->ts) / 1000000);
    self->ts = 0;
    self->txn_id = 0;
}

/* Print a summary every 10 seconds; aggregation values cannot be read
   back in D, so the abort rate uses the plain counters */
tick-10s
{
    printf("\n=== Transaction Performance Summary ===\n");
    printa(@start);
    printa(@commit);
    printa(@abort);
    printa(@time);
    printa(@max_time);
    printf("Abort Rate (%%): %d\n", (100 * aborted) / (started + 1));
    printf("=========================================\n");
}

Run it:

bash
# Start monitoring (replace 12345 with the PID of the backend to trace;
# the script's $1 selects the matching postgresql<pid> provider)
sudo ./transaction_monitor.d 12345

Interpreting the output:

Long transaction detected: TXN_ID=12345, Duration=2500 ms
Transaction aborted: TXN_ID=12347, Duration=150 ms

=== Transaction Performance Summary ===
Transaction Started                    1250
Transaction Committed                  1180
Transaction Aborted                    70
Avg Commit Time (ms)                   45
Max Commit Time (ms)                   2500
Abort Rate (%): 5
=========================================

An abort rate above 5% points to possible application-logic problems or lock contention in the database, and warrants further investigation.
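DTrace's D language works in integer arithmetic, so for an exact figure it is easiest to recompute the rate offline from the counters in the summary; a quick awk sketch using the hypothetical sample values:

```shell
# Hypothetical counters taken from the sample summary above
started=1250
aborted=70

# awk supplies the floating-point division that the D script avoids
rate=$(awk -v a="$aborted" -v s="$started" 'BEGIN { printf "%.1f", 100 * a / s }')
echo "abort rate: ${rate}%"
```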

Scenario 2: In-depth Query Performance Analysis

Business context: The analytics team reports that certain complex queries run slowly; we need to determine whether the time goes to parsing, planning, or execution.

bash
#!/usr/sbin/dtrace -qs
/* File: query_breakdown.d */

string query_text[pid_t];

postgresql$1:::query-start
{
    query_text[pid] = copyinstr(arg0);
    self->query_start = timestamp;
    printf("Query started: %s\n", query_text[pid]);
}

postgresql$1:::query-parse-start
{
    self->parse_start = timestamp;
}

postgresql$1:::query-parse-done
{
    self->parse_time = timestamp - self->parse_start;
    self->parse_start = 0;
}

postgresql$1:::query-rewrite-start
{
    self->rewrite_start = timestamp;
}

postgresql$1:::query-rewrite-done
{
    self->rewrite_time = timestamp - self->rewrite_start;
    self->rewrite_start = 0;
}

postgresql$1:::query-plan-start
{
    self->plan_start = timestamp;
}

postgresql$1:::query-plan-done
{
    self->plan_time = timestamp - self->plan_start;
    self->plan_start = 0;
}

postgresql$1:::query-execute-start
{
    self->execute_start = timestamp;
}

postgresql$1:::query-execute-done
{
    self->execute_time = timestamp - self->execute_start;
    self->execute_start = 0;
}

/* Tally slow queries first; D has no if statement, so use a predicate */
postgresql$1:::query-done
/self->query_start && (timestamp - self->query_start) / 1000000 > 1000/
{
    @slow_queries["Slow Queries (>1s)"] = count();
    @avg_slow_time["Avg Slow Query Time (ms)"] =
        avg((timestamp - self->query_start) / 1000000);
}

postgresql$1:::query-done
/self->query_start/
{
    this->total_time = (timestamp - self->query_start) / 1000000;
    this->denom = this->total_time > 0 ? this->total_time : 1;
    this->parse_ms = self->parse_time / 1000000;
    this->rewrite_ms = self->rewrite_time / 1000000;
    this->plan_ms = self->plan_time / 1000000;
    this->execute_ms = self->execute_time / 1000000;

    printf("\n=== Query Performance Breakdown ===\n");
    printf("Query: %.80s%s\n", query_text[pid],
           strlen(query_text[pid]) > 80 ? "..." : "");
    printf("Total Time: %d ms\n", this->total_time);
    /* D has no floating-point arithmetic, so report integer percentages */
    printf("  Parse:    %d ms (%d%%)\n", this->parse_ms,
           (this->parse_ms * 100) / this->denom);
    printf("  Rewrite:  %d ms (%d%%)\n", this->rewrite_ms,
           (this->rewrite_ms * 100) / this->denom);
    printf("  Plan:     %d ms (%d%%)\n", this->plan_ms,
           (this->plan_ms * 100) / this->denom);
    printf("  Execute:  %d ms (%d%%)\n", this->execute_ms,
           (this->execute_ms * 100) / this->denom);
    printf("=====================================\n");

    /* Per-stage averages */
    @avg_parse["Avg Parse Time (ms)"] = avg(this->parse_ms);
    @avg_plan["Avg Plan Time (ms)"] = avg(this->plan_ms);
    @avg_execute["Avg Execute Time (ms)"] = avg(this->execute_ms);

    /* Reset per-thread state */
    self->query_start = 0;
    self->parse_time = 0;
    self->rewrite_time = 0;
    self->plan_time = 0;
    self->execute_time = 0;
    query_text[pid] = "";
}

Typical output:

=== Query Performance Breakdown ===
Query: SELECT o.order_id, c.customer_name, SUM(oi.quantity * oi.price) FROM orders o...
Total Time: 2500 ms
  Parse:    5 ms (0%)
  Rewrite:  8 ms (0%)
  Plan:     187 ms (7%)
  Execute:  2300 ms (92%)
=====================================

Conclusions:

  • Execution accounts for 92% of the time, so the problem lies mainly in query execution
  • Planning took a relatively long 187 ms, which may indicate complex joins or missing statistics
  • The execution plan and indexing strategy deserve further analysis
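The stage percentages are simple ratios of the per-stage times to the total; recomputing them from the hypothetical sample numbers above confirms the breakdown:

```shell
# Hypothetical stage timings (ms) from the sample breakdown
total=2500
parse=5; plan=187; execute=2300

parse_pct=$(awk -v t="$total" -v v="$parse"   'BEGIN { printf "%.1f", 100 * v / t }')
plan_pct=$(awk -v t="$total" -v v="$plan"     'BEGIN { printf "%.1f", 100 * v / t }')
exec_pct=$(awk -v t="$total" -v v="$execute"  'BEGIN { printf "%.1f", 100 * v / t }')

echo "parse ${parse_pct}%  plan ${plan_pct}%  execute ${exec_pct}%"
```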

Scenario 3: Lock Contention Monitoring

Business context: Deadlocks and lock waits appear under high concurrency and hurt system throughput.

bash
#!/usr/sbin/dtrace -qs
/* File: lock_monitor.d */

postgresql$1:::lwlock-wait-start
{
    self->lock_wait_start = timestamp;
    self->lock_name = copyinstr(arg0);
    @lock_waits[copyinstr(arg0)] = count();
}

/* Report unusually long waits; D has no if statement, so use a predicate */
postgresql$1:::lwlock-wait-done
/self->lock_wait_start &&
 (timestamp - self->lock_wait_start) / 1000000 > 100/
{
    printf("Long lock wait: %s, %d ms\n", self->lock_name,
           (timestamp - self->lock_wait_start) / 1000000);
}

postgresql$1:::lwlock-wait-done
/self->lock_wait_start/
{
    this->wait_time_ms = (timestamp - self->lock_wait_start) / 1000000;
    @avg_wait_time[self->lock_name] = avg(this->wait_time_ms);
    @max_wait_time[self->lock_name] = max(this->wait_time_ms);

    self->lock_wait_start = 0;
    self->lock_name = "";
}

/* Heavyweight lock waits: arg0/arg1 are the first two locktag fields
   (database OID and relation OID for relation locks); arg5 is the mode */
postgresql$1:::lock-wait-start
{
    printf("Heavy lock wait started: DB=%d, REL=%d, MODE=%d\n", arg0, arg1, arg5);
    @heavy_lock_waits["Heavy Lock Waits"] = count();
}

postgresql$1:::deadlock-found
{
    @deadlocks["Deadlocks Detected"] = count();
    printf("DEADLOCK DETECTED at %Y\n", walltimestamp);
}

/* Periodic report */
tick-30s
{
    printf("\n=== Lock Performance Report ===\n");
    printf("Top Lock Contention:\n");
    printa("  %s: %@d waits\n", @lock_waits);
    printf("\nAverage Wait Times:\n");
    printa("  %s: %@d ms\n", @avg_wait_time);
    printf("\nMaximum Wait Times:\n");
    printa("  %s: %@d ms\n", @max_wait_time);
    printa(@heavy_lock_waits);
    printa(@deadlocks);
    printf("===============================\n");
}

Scenario 4: I/O Performance Monitoring

Business context: Database response times are erratic; disk I/O performance needs monitoring.

bash
#!/usr/sbin/dtrace -qs
/* File: io_monitor.d */

postgresql$1:::buffer-read-start
{
    self->read_start = timestamp;
    self->read_block = arg1;
    self->read_rel = arg4;
}

/* arg6 is the "found in shared buffers" flag (the argument position
   varies across PostgreSQL versions; check probes.d for your release) */
postgresql$1:::buffer-read-done
/self->read_start && arg6/
{
    hits++;
    @buffer_hits["Buffer Cache Hits"] = count();
    self->read_start = 0;
}

postgresql$1:::buffer-read-done
/self->read_start && !arg6 &&
 (timestamp - self->read_start) / 1000000 > 50/
{
    printf("Slow disk read: Block=%d, Relation=%d, Time=%d ms\n",
           self->read_block, self->read_rel,
           (timestamp - self->read_start) / 1000000);
}

postgresql$1:::buffer-read-done
/self->read_start/
{
    misses++;
    this->read_time_ms = (timestamp - self->read_start) / 1000000;
    @buffer_misses["Buffer Cache Misses"] = count();
    @avg_read_time["Avg Disk Read Time (ms)"] = avg(this->read_time_ms);
    @max_read_time["Max Disk Read Time (ms)"] = max(this->read_time_ms);
    self->read_start = 0;
}

postgresql$1:::buffer-flush-start
{
    self->flush_start = timestamp;
    @flush_count["Buffer Flushes"] = count();
}

postgresql$1:::buffer-flush-done
/self->flush_start/
{
    @avg_flush_time["Avg Flush Time (ms)"] =
        avg((timestamp - self->flush_start) / 1000000);
    self->flush_start = 0;
}

/* Aggregation values cannot be read back in D, so the hit ratio is
   computed from the plain counters (integer percent) */
tick-10s
/hits + misses > 0/
{
    printf("Buffer Cache Hit Ratio: %d%%\n", (hits * 100) / (hits + misses));
}
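As with the abort rate, an exact hit ratio is easiest to recompute offline from the two counters; a sketch with hypothetical values:

```shell
# Hypothetical counters from a monitoring run
hits=675
misses=325

ratio=$(awk -v h="$hits" -v m="$misses" 'BEGIN { printf "%.1f", 100 * h / (h + m) }')
echo "buffer cache hit ratio: ${ratio}%"
```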

Developing Custom Probes

Creating Business-specific Probes

Suppose we need to monitor a custom order-processing function:

Step 1: Define the probes

Add the following to src/backend/utils/probes.d:

c
/* Order-processing probes; add them inside the existing
   "provider postgresql { ... }" block */
probe order__processing__start(int, int);
probe order__processing__done(int, int, int);
probe order__validation__failed(int, char *);
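The generated names follow a mechanical convention: each `__` in the probes.d declaration becomes `-` in the DTrace probe name, while the C macro (as used in the PostgreSQL source) is the upper-cased name with a `TRACE_POSTGRESQL_` prefix. A sketch of the mapping:

```shell
# Derive the DTrace probe name and C trace macro from a probes.d name
decl="order__processing__start"

probe_name=$(printf '%s' "$decl" | sed 's/__/-/g')
macro="TRACE_POSTGRESQL_$(printf '%s' "$decl" | sed 's/__/_/g' | tr '[:lower:]' '[:upper:]')"

echo "$probe_name"
echo "$macro"
```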

Step 2: Add probe calls in the source code

c
// In the order-processing function
void process_order(int order_id, int customer_id) {
    TRACE_POSTGRESQL_ORDER_PROCESSING_START(order_id, customer_id);

    // Business logic
    if (!validate_order(order_id)) {
        TRACE_POSTGRESQL_ORDER_VALIDATION_FAILED(order_id, "Invalid order data");
        return;
    }

    int result = execute_order_processing(order_id);

    TRACE_POSTGRESQL_ORDER_PROCESSING_DONE(order_id, customer_id, result);
}

Step 3: Write the monitoring script

bash
#!/usr/sbin/dtrace -qs
/* File: order_monitor.d */

postgresql$1:::order-processing-start
{
    self->order_start = timestamp;
    self->order_id = arg0;
    self->customer_id = arg1;
    @order_starts["Orders Started"] = count();
}

postgresql$1:::order-processing-done
/self->order_start/
{
    this->processing_time = (timestamp - self->order_start) / 1000000;
    @avg_processing_time["Avg Processing Time (ms)"] = avg(this->processing_time);
    @successful_orders["Successful Orders"] = count();

    printf("Order processed: ID=%d, Customer=%d, Time=%d ms\n",
           self->order_id, self->customer_id, this->processing_time);

    self->order_start = 0;
}

postgresql$1:::order-validation-failed
{
    @validation_failures["Validation Failures"] = count();
    printf("Order validation failed: ID=%d, Reason=%s\n", arg0, copyinstr(arg1));
}

Performance Best Practices

Probe Overhead Considerations

WARNING

Dynamic tracing adds overhead; weigh it carefully before using it in production:

  1. Argument evaluation: probe arguments may be computed even when the probe is disabled, unless guarded by the _ENABLED() macros
  2. Memory use: large volumes of probe data can consume extra memory
  3. CPU impact: probes that fire at high frequency can cost measurable CPU

Mitigations:

c
// Guard expensive argument computation behind the generated _ENABLED() macro
if (TRACE_POSTGRESQL_QUERY_SLOW_ENABLED()) {
    TRACE_POSTGRESQL_QUERY_SLOW(expensive_function_call());
}

// Avoid complex probes on hot code paths
if (query_execution_time > SLOW_QUERY_THRESHOLD &&
    TRACE_POSTGRESQL_SLOW_QUERY_ENABLED()) {
    TRACE_POSTGRESQL_SLOW_QUERY(query_text, execution_time);
}

Monitoring Script Optimization

bash
/* Aggregate and report once a minute instead of on every event */
tick-60s
{
    normalize(@avg_time, 60);  /* scale the aggregation by the interval */
    printa(@avg_time);
    clear(@avg_time);          /* drop accumulated data to bound memory use */
}

/* Filter out low-value events */
postgresql$1:::query-start
/strlen(copyinstr(arg0)) > 50/  /* only trace longer query strings */
{
    /* monitoring logic */
}

Integrating with System Monitoring Tools

Prometheus Integration

bash
#!/bin/bash
# File: postgres_dtrace_exporter.sh

# Run the DTrace script and emit Prometheus-format metrics
# (pass the PID of a single backend; pgrep may return several PIDs)
dtrace -q -s transaction_monitor.d "$1" | \
while read -r line; do
    if [[ $line =~ "Transaction Started".*[[:space:]]([0-9]+)$ ]]; then
        echo "postgres_transactions_started_total ${BASH_REMATCH[1]}"
    elif [[ $line =~ "Avg Commit Time".*[[:space:]]([0-9]+)$ ]]; then
        echo "postgres_transaction_commit_duration_ms ${BASH_REMATCH[1]}"
    fi
done
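The parsing loop can be exercised offline by piping captured summary lines through the same logic (the sample lines below are hypothetical):

```shell
# Same parsing logic as the exporter, wrapped in a function for testing
parse_metrics() {
  while IFS= read -r line; do
    if [[ $line =~ "Transaction Started".*[[:space:]]([0-9]+)$ ]]; then
      echo "postgres_transactions_started_total ${BASH_REMATCH[1]}"
    elif [[ $line =~ "Avg Commit Time".*[[:space:]]([0-9]+)$ ]]; then
      echo "postgres_transaction_commit_duration_ms ${BASH_REMATCH[1]}"
    fi
  done
}

printf '%s\n' \
  'Transaction Started                    1250' \
  'Avg Commit Time (ms)                   45' | parse_metrics
```

Note the `.*` before the captured number: it lets the pattern skip the `(ms)` unit that follows "Avg Commit Time" in the summary line.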

Integrating with a Monitoring Dashboard


A Hands-on Troubleshooting Case

Case: Sudden Performance Drop in an E-commerce System

Problem: During the Singles' Day sale, order-system response times jumped from the usual 200 ms to 2000 ms.

Investigation steps:

  1. Launch comprehensive monitoring
bash
# Run several monitoring scripts concurrently
# (replace 12345 with the PID of the backend under investigation)
dtrace -s transaction_monitor.d 12345 &
dtrace -s query_breakdown.d 12345 &
dtrace -s lock_monitor.d 12345 &
dtrace -s io_monitor.d 12345 &
  2. Analyze the monitoring data
=== Key findings ===
Lock Performance Report:
  WALWriteLock: 2847 waits
  BufFreelistLock: 1923 waits

Average Wait Times:
  WALWriteLock: 145 ms
  BufFreelistLock: 89 ms

Buffer Cache Hit Ratio: 67.5%  // should normally be > 95%
  3. Identify the root cause

    • Severe WALWriteLock contention → wal_buffers is too small
    • Low cache hit ratio → shared_buffers is insufficient
    • Many waits on the buffer free-list lock → more buffer space is needed
  4. Apply fixes

sql
-- Adjust key parameters
ALTER SYSTEM SET wal_buffers = '64MB';   -- was 16MB
ALTER SYSTEM SET shared_buffers = '8GB'; -- was 2GB
ALTER SYSTEM SET effective_cache_size = '24GB';
-- effective_cache_size takes effect on reload; wal_buffers and
-- shared_buffers require a server restart
SELECT pg_reload_conf();
  5. Verify the results
=== After tuning ===
Average transaction time: 180ms   -- down from 2000 ms
Buffer Cache Hit Ratio: 96.8%     -- back to a healthy level
WALWriteLock average wait: 12ms   -- drastically reduced

Dynamic tracing let us pinpoint the exact bottleneck quickly instead of tuning blindly. Under high concurrency, lock contention and buffer management are often the decisive factors.

Summary

PostgreSQL dynamic tracing offers deep insight into the database's internal workings. By using the built-in probes, plus custom ones where needed, we can:

  • Pinpoint bottlenecks precisely: analyze problems across transactions, queries, locks, and I/O
  • Monitor key metrics in real time: gather detailed performance data without downtime
  • Tune resource allocation: base configuration decisions on real runtime data
  • Perform preventive maintenance: catch emerging problems before they escalate

When using dynamic tracing in production, balance the need for visibility against its overhead: choose appropriate probes and reporting intervals so that the insight gained does not come at the cost of normal operation.
