
The first thing that we did is what any good detective would do – we started looking for clues. The client provided us with crash logs and said that the pg_xlog directory was suspiciously big, even though it held only around two thousand files of 16 MB each. We looked into postgresql.log to see what the last lines before the crash were:

PANIC: could not write to file "pg_xlog/xlogtemp.9023": No space left on device
LOG: WAL writer process (PID 9023) was terminated by signal 6: Aborted
LOG: terminating any other active server processes
Looking at the other log messages, we saw another FATAL record:

FATAL: archive command failed with exit code 131
DETAIL: The failed archive command was: /opt/utils/pgdb/wal-archiver pg_xlog/000000010000000000000003

From these two log messages, we concluded that free space had been completely exhausted because of the failing archive command – Postgres could not write its transaction log and ultimately crashed. That was the answer to the first question. But how can we prevent this from happening again, and what should we do if it does happen?
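A failing archive command can be caught long before the disk fills up by watching the pg_stat_archiver statistics view (available since PostgreSQL 9.4). The sketch below is a minimal illustration, not something from the original incident; the connection string and the idea of exiting non-zero for an alerting system are assumptions.

# check_archiver.py - minimal sketch: warn when WAL archiving keeps failing.
# Assumes the psycopg2 driver and a role allowed to read pg_stat_archiver.
import sys
import psycopg2

DSN = "dbname=postgres user=postgres host=localhost"  # assumption: adjust for your setup

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute("""
            SELECT archived_count, failed_count, last_failed_wal, last_failed_time
            FROM pg_stat_archiver
        """)
        archived, failed, last_failed_wal, last_failed_time = cur.fetchone()

print(f"archived={archived} failed={failed}")
if failed and last_failed_wal:
    # Repeated failures mean pg_wal keeps growing until the disk is full.
    print(f"WARNING: last failed segment {last_failed_wal} at {last_failed_time}", file=sys.stderr)
    sys.exit(1)

Run from cron or any scheduler; a non-zero exit code is easy to hook into whatever alerting you already have.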
Let me take a little detour here for those who are unfamiliar with pg_xlog. pg_xlog is the directory where Postgres keeps its transaction log (also known as XLOG or WAL), which is used for recovery purposes. For various reasons, people often delete files from this directory: its name contains the word "log", and one is inclined to think, "These are just logs, nothing will happen if I delete them." In fact, pg_xlog plays a vital role in the database's life! Furthermore, to prevent situations where these files are accidentally deleted, pg_xlog was renamed to pg_wal in Postgres 10. Note: in this article, I will use the newer term 'pg_wal' for both 'pg_xlog' and 'pg_wal'.
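For readers who want to peek into pg_wal without shelling into the server, PostgreSQL 10 and later expose pg_ls_waldir() and pg_current_wal_lsn(). This is a small sketch under that assumption; the connection string is made up.

# inspect_wal.py - sketch: report the current WAL position and the size of pg_wal.
# pg_ls_waldir() and pg_current_wal_lsn() exist in PostgreSQL 10+;
# pg_current_wal_lsn() only works on a primary (on 9.x it was pg_current_xlog_location()).
import psycopg2

DSN = "dbname=postgres user=postgres host=localhost"  # assumption: adjust for your setup

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT pg_current_wal_lsn()")
        (current_lsn,) = cur.fetchone()

        cur.execute("SELECT count(*), pg_size_pretty(sum(size)) FROM pg_ls_waldir()")
        file_count, total_size = cur.fetchone()

print(f"current WAL position: {current_lsn}")
print(f"pg_wal holds {file_count} files, {total_size} in total")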

Now, let's get back to our mystery: how can we avoid being left without space on our device? The most obvious answer here is monitoring. The amount of free or used disk space must always be kept under control, and there has to be a system that alerts you when a threshold is reached. If there is no monitoring system, disk space can be checked with the "df" utility. It helps you start working on an issue before it becomes a real problem.
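As a stop-gap between running df by hand and a proper monitoring system, even a tiny scheduled check can do the alerting. The sketch below is illustrative only; the pg_wal path and the 10% threshold are assumptions.

# disk_alert.py - sketch: warn when the volume holding pg_wal runs low on free space.
import shutil
from pathlib import Path

PG_WAL = Path("/var/lib/postgresql/10/main/pg_wal")  # assumption: Debian-style data directory
FREE_THRESHOLD = 0.10                                # assumption: warn below 10% free space

usage = shutil.disk_usage(PG_WAL)
free_ratio = usage.free / usage.total

# Size of the WAL files themselves, to see whether pg_wal is what is eating the disk.
wal_bytes = sum(f.stat().st_size for f in PG_WAL.iterdir() if f.is_file())

print(f"free space: {free_ratio:.1%}, pg_wal size: {wal_bytes / 1024 ** 3:.1f} GiB")
if free_ratio < FREE_THRESHOLD:
    print("WARNING: free space is below the threshold - investigate before Postgres crashes")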
We recommend that you do not change the TIE Server PostgreSQL configuration: it allows the database to use native streaming replication in a primary or secondary configuration, and the primary or secondary configurations might have hot-standby capabilities that enable scalability for read operations. You can, however, tune the parameter below, as needed, to address unusual delays.

The parameter specifies the minimum number of past log file segments kept in the pg_xlog directory. Increasing this value helps to prevent the primary server from removing a WAL segment that is still needed by the standby, in which case the replication connection closes. This situation is possible when the Data Exchange Layer (DXL) architecture has nodes with slow connectivity and, as a result, the PostgreSQL replication process is slow.
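In stock PostgreSQL the setting described above corresponds to wal_keep_segments (replaced by wal_keep_size in PostgreSQL 13). The sketch below shows how such a setting could be inspected and raised; it is only an illustration, and the connection string and the value 64 are assumptions rather than TIE Server recommendations.

# tune_wal_keep_segments.py - sketch: show the current value and raise it.
# ALTER SYSTEM writes to postgresql.auto.conf; wal_keep_segments only needs a reload, not a restart.
import psycopg2

DSN = "dbname=postgres user=postgres host=localhost"  # assumption: superuser connection to the primary
NEW_VALUE = 64                                        # assumption: size it to your longest expected standby lag

conn = psycopg2.connect(DSN)
conn.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block
cur = conn.cursor()

cur.execute("SHOW wal_keep_segments")
print("current wal_keep_segments:", cur.fetchone()[0])

cur.execute("ALTER SYSTEM SET wal_keep_segments = %s", (NEW_VALUE,))
cur.execute("SELECT pg_reload_conf()")
print("requested wal_keep_segments:", NEW_VALUE)

cur.close()
conn.close()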


The replication status reported by the script includes the following values:

- Whether the server is set as a secondary server and the replication is properly configured (True if so).
- Whether the replication is paused (True if so).
- The last replication log received on the secondary server.
- The current replication log on the primary server. To get this value, the script executes a remote query to the primary server.
- The last replication log replayed on the secondary server. Logs can be received, but not replayed yet.
- The difference in bytes between the primary server and the last received log on the secondary server. It helps to determine by how much the replication process is behind.
- The difference in bytes between the primary server and the secondary server.
- The time that the last log was replayed on the secondary server. This is probably the most important value for determining the health of the replication process. If there is no activity on the primary server, no replication activity occurs on the secondary server and the value remains the same; the value staying the same does not mean that the replication has failed.
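These values map closely onto PostgreSQL's built-in replication functions, so a similar report can be produced by hand on a PostgreSQL 10+ standby. The sketch below is not the TIE Server script itself; the connection strings are assumptions.

# replication_status.py - sketch: report standby replication health (PostgreSQL 10+ function names).
# On 9.x the functions were called pg_last_xlog_receive_location(), etc.
import psycopg2

STANDBY_DSN = "dbname=postgres user=postgres host=standby.example"  # assumption
PRIMARY_DSN = "dbname=postgres user=postgres host=primary.example"  # assumption

with psycopg2.connect(STANDBY_DSN) as standby:
    with standby.cursor() as cur:
        cur.execute("""
            SELECT pg_is_in_recovery(),
                   pg_is_wal_replay_paused(),
                   pg_last_wal_receive_lsn(),
                   pg_last_wal_replay_lsn(),
                   pg_last_xact_replay_timestamp()
        """)
        in_recovery, paused, received, replayed, last_replay_time = cur.fetchone()

# The current position on the primary requires a remote query, just like the vendor script.
with psycopg2.connect(PRIMARY_DSN) as primary:
    with primary.cursor() as cur:
        cur.execute("SELECT pg_current_wal_lsn(), pg_wal_lsn_diff(pg_current_wal_lsn(), %s)",
                    (received,))
        primary_lsn, receive_delay_bytes = cur.fetchone()

print(f"secondary configured: {in_recovery}, replication paused: {paused}")
print(f"primary at {primary_lsn}, received {received}, replayed {replayed}")
print(f"receive delay: {receive_delay_bytes} bytes, last replay at {last_replay_time}")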
