Tracing is supported on Hadoop2 only.
Configure hadoop-metrics2-phoenix.properties
# Sample from all the sources every 10 seconds
*.period=10

# Write Traces to Phoenix
##########################
# ensure that we receive traces on the server
phoenix.sink.tracing.class=org.apache.phoenix.trace.PhoenixMetricsSink
# Tell the sink where to write the metrics
phoenix.sink.tracing.writer-class=org.apache.phoenix.trace.PhoenixTableMetricsWriter
# Only handle traces with a context of "tracing"
phoenix.sink.tracing.context=tracing
Configure hadoop-metrics2-hbase.properties
# ensure that we receive traces on the server
hbase.sink.tracing.class=org.apache.phoenix.trace.PhoenixMetricsSink
# Tell the sink where to write the metrics
hbase.sink.tracing.writer-class=org.apache.phoenix.trace.PhoenixTableMetricsWriter
# Only handle traces with a context of "tracing"
hbase.sink.tracing.context=tracing
Configure hbase-site.xml
<configuration>
  <property>
    <name>phoenix.trace.frequency</name>
    <value>always</value>
  </property>
  <!-- Optional: override the default tracing table (SYSTEM.TRACING_STATS) -->
  <property>
    <name>phoenix.trace.statsTableName</name>
    <value>your_custom_tracing_table_name</value>
  </property>
</configuration>
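With the sink configured, tracing can also be toggled per connection from sqlline instead of tracing every statement. A minimal sketch, assuming your Phoenix version supports the TRACE statement (the SELECT shown is only a placeholder query to trace):

```sql
-- Start collecting traces for this connection
TRACE ON;
-- Run the statements you want traced (placeholder example)
SELECT 1;
-- Stop tracing; the collected spans are flushed to the tracing table
TRACE OFF;
```

Setting phoenix.trace.frequency to always in hbase-site.xml traces every request, which is costly; per-connection TRACE ON/OFF limits that overhead to the statements you care about.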
The tracing table is initialized via the following DDL:
CREATE TABLE SYSTEM.TRACING_STATS (
trace_id BIGINT NOT NULL,
parent_id BIGINT NOT NULL,
span_id BIGINT NOT NULL,
description VARCHAR,
start_time BIGINT,
end_time BIGINT,
hostname VARCHAR,
tags.count SMALLINT,
annotations.count SMALLINT,
CONSTRAINT pk PRIMARY KEY (trace_id, parent_id, span_id))
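Once traces are being written, they can be inspected with ordinary SQL. A sketch of a query over the table above, assuming start_time and end_time are epoch timestamps in milliseconds (duration_ms is a derived alias, not a column):

```sql
-- Most recent spans, with computed duration per span
SELECT trace_id,
       description,
       (end_time - start_time) AS duration_ms,
       hostname
FROM SYSTEM.TRACING_STATS
ORDER BY start_time DESC
LIMIT 20;
```

Filtering on trace_id then lets you drill into all spans of a single request, using parent_id/span_id to reconstruct the call tree.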