Maintaining PostgreSQL databases on Windows with command-line scripts


ANALYZE (WITHOUT VACUUM!): every day (for example, in the morning, before users start working in 1C).
REINDEX: once a week, as a preventive measure (optional if your indexes do not bloat and you do a dump/restore frequently).
VACUUM FULL + ANALYZE: once a week (best done at the end of the working week, e.g. on Friday).

By performing these simple database maintenance steps in PostgreSQL, you will keep 1C running stably for your users, improve performance, and avoid potential errors.


Every day

@echo off
for /f "delims=." %%i in ('wmic.exe OS get LocalDateTime ^| find "."') do set sDateTime=%%i
set f_year=%sDateTime:~0,4%
set f_month=%sDateTime:~4,2%
set f_day=%sDateTime:~6,2%
set f_hour=%sDateTime:~8,2%
set f_min=%sDateTime:~10,2%
set f_sec=%sDateTime:~12,2%

set f_name_log=F:\support_logs\postgresql_%f_year%.%f_month%.%f_day%

@echo on
echo start: %f_year%.%f_month%.%f_day% - %f_hour%:%f_min%:%f_sec% >>%f_name_log%_vacuum.log

SET PGPASSWORD=**********

REM --analyze-only updates planner statistics without vacuuming, which matches the daily "ANALYZE (WITHOUT VACUUM!)" rule above
"C:\Program Files\PostgreSQL\13.3-2.1C\bin\vacuumdb.exe" --dbname KAMIN --host 127.0.0.1 --port 5432 --username postgres --echo --analyze-only >>%f_name_log%_vacuum.log

echo end: %f_year%.%f_month%.%f_day% - %f_hour%:%f_min%:%f_sec% >>%f_name_log%_vacuum.log


Once a week (the date variables are set the same way as in the daily script above)

set f_name_log=Z:\support_logs\KAMIN_%f_year%.%f_month%.%f_day%

@echo on
echo start: %f_year%.%f_month%.%f_day% - %f_hour%:%f_min%:%f_sec% >>%f_name_log%_vacuum.log

SET PGPASSWORD=**********

"C:\Program Files\PostgresPro\12\bin\vacuumdb.exe" --dbname KAMIN --host 127.0.0.1 --port 5432 --username postgres --echo --full --analyze >>%f_name_log%_vacuum.log

echo end: %f_year%.%f_month%.%f_day% - %f_hour%:%f_min%:%f_sec% >>%f_name_log%_vacuum.log

echo start: %f_year%.%f_month%.%f_day% - %f_hour%:%f_min%:%f_sec% >>%f_name_log%_reindex.log

"C:\Program Files\PostgresPro\12\bin\reindexdb.exe" --dbname KAMIN --host 127.0.0.1 --port 5432 --username postgres --echo >>%f_name_log%_reindex.log

echo end: %f_year%.%f_month%.%f_day% - %f_hour%:%f_min%:%f_sec% >>%f_name_log%_reindex.log


Z:\support_logs\KAMIN - the path where the logs are stored

********** - the database password

C:\Program Files\PostgresPro\12 - the path to the PostgreSQL installation

KAMIN - the name of the database being maintained

A script that maintains all the databases (add it to Task Scheduler)

pg_vacuum.bat

REM A SCRIPT THAT MAINTAINS ALL DATABASES IN THE POSTGRESQL CLUSTER
CLS
ECHO OFF
CHCP 1251
REM Set environment variables
SET PGBIN=C:\Program Files\PostgreSQL\10.5-24.1C\bin
SET PGUSER=postgres
SET PGPASSWORD=pass
REM Switch to the drive and directory the bat file was started from
%~d0
CD %~dp0
REM Build the log file name from the current date and time
SET DATETIME=%DATE:~6,4%-%DATE:~3,2%-%DATE:~0,2%_%TIME:~0,2%-%TIME:~3,2%-%TIME:~6,2%
SET LOGFILE=%DATETIME%.log
SET LOGPATH="Vacuum\%LOGFILE%"
REM Maintain all databases in the cluster (-z analyze, -f full, -a all databases)
IF NOT EXIST Vacuum MD Vacuum
CALL "%PGBIN%\vacuumdb" -zfa -U %PGUSER% 2>>%LOGPATH%
REM Check the exit code
IF NOT %ERRORLEVEL%==0 GOTO Error
GOTO Success
REM On error, write a corresponding entry to the journal
:Error
MSG * "Vacuum failed. See vacuum.log."
ECHO %DATETIME% Errors during Vacuum. See the report %LOGFILE%. >> vacuum.log
GOTO End
REM On success, simply write an entry to the journal
:Success
ECHO %DATETIME% Vacuum completed successfully >> vacuum.log
GOTO End
:End

Save the file.
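
To run these scripts on a schedule, they can be registered in Task Scheduler from an elevated command prompt. A minimal sketch, assuming the script was saved as F:\scripts\pg_vacuum.bat and a Friday 02:00 maintenance window (both are assumptions; adjust to your environment):

REM Hypothetical example: run the weekly maintenance script every Friday at 02:00 under the SYSTEM account
schtasks /Create /TN "PostgreSQL weekly maintenance" /TR "F:\scripts\pg_vacuum.bat" /SC WEEKLY /D FRI /ST 02:00 /RU SYSTEM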

There are a few routine maintenance chores that must be
performed on a regular basis to keep a PostgreSQL server running smoothly. The
tasks discussed here are repetitive in nature and can easily be
automated using standard Unix tools such as cron scripts. But it is the database
administrator’s responsibility to set up appropriate scripts, and
to check that they execute successfully.

One obvious maintenance task is creation of backup copies of
the data on a regular schedule. Without a recent backup, you have
no chance of recovery after a catastrophe (disk failure, fire,
mistakenly dropping a critical table, etc.). The backup and
recovery mechanisms available in PostgreSQL are discussed at length in
Chapter 22.

The other main category of maintenance task is periodic
"vacuuming" of the database. This
activity is discussed in Section 21.1.

Something else that might need periodic attention is log file
management. This is discussed in Section 21.3.

PostgreSQL is low-maintenance
compared to some other database management systems. Nonetheless,
appropriate attention to these tasks will go far towards ensuring
a pleasant and productive experience with the system.

21.1. Routine Vacuuming

PostgreSQL's VACUUM command must be run on a regular basis
for several reasons:

  1. To recover disk space occupied by updated or deleted
    rows.

  2. To update data statistics used by the PostgreSQL query planner.

  3. To protect against loss of very old data due to
    transaction ID wraparound.

The frequency and scope of the VACUUM operations performed for each of these
reasons will vary depending on the needs of each site.
Therefore, database administrators must understand these issues
and develop an appropriate maintenance strategy. This section
concentrates on explaining the high-level issues; for details
about command syntax and so on, see the VACUUM reference page.

Beginning in PostgreSQL
7.2, the standard form of VACUUM can
run in parallel with normal database operations (selects,
inserts, updates, deletes, but not changes to table
definitions). Routine vacuuming is therefore not nearly as
intrusive as it was in prior releases, and it is not as
critical to try to schedule it at low-usage times of day.

Beginning in PostgreSQL
8.0, there are configuration parameters that can be adjusted to
further reduce the performance impact of background vacuuming.
See Section
16.4.3.4.

21.1.1. Recovering disk space

In normal PostgreSQL
operation, an UPDATE or DELETE of a row does not immediately remove
the old version of the row. This approach is necessary to
gain the benefits of multiversion concurrency control (see
Chapter 12): the row version must not
be deleted while it is still potentially visible to other
transactions. But eventually, an outdated or deleted row
version is no longer of interest to any transaction. The
space it occupies must be reclaimed for reuse by new rows, to
avoid infinite growth of disk space requirements. This is
done by running VACUUM.

Clearly, a table that receives frequent updates or deletes
will need to be vacuumed more often than tables that are
seldom updated. It may be useful to set up periodic
cron tasks that VACUUM only selected tables, skipping tables
that are known not to change often. This is only likely to be
helpful if you have both large heavily-updated tables and
large seldom-updated tables — the extra cost of vacuuming a
small table isn’t enough to be worth worrying about.
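
On Windows, a sketch of the same selective schedule using vacuumdb's --table switch in place of a cron entry (the installation path and database KAMIN follow the scripts above; the table name public.busy_table is a hypothetical stand-in for a large, heavily-updated table):

REM Hypothetical example: vacuum and analyze a single heavily-updated table
"C:\Program Files\PostgreSQL\13.3-2.1C\bin\vacuumdb.exe" --dbname KAMIN --username postgres --table public.busy_table --analyze --echo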

There are two variants of the VACUUM command. The first form, known as
"lazy vacuum" or just VACUUM, marks expired data in tables and
indexes for future reuse; it does not attempt to reclaim the
space used by this expired data immediately. Therefore, the
table file is not shortened, and any unused space in the file
is not returned to the operating system. This variant of
VACUUM can be run concurrently with
normal database operations.

The second form is the VACUUM
FULL command. This uses a more aggressive algorithm for
reclaiming the space consumed by expired row versions. Any
space that is freed by VACUUM FULL
is immediately returned to the operating system.
Unfortunately, this variant of the VACUUM command acquires an exclusive lock on
each table while VACUUM FULL is
processing it. Therefore, frequently using VACUUM FULL can have an extremely negative
effect on the performance of concurrent database queries.

The standard form of VACUUM is
best used with the goal of maintaining a fairly level
steady-state usage of disk space. If you need to return disk
space to the operating system you can use VACUUM FULL — but what’s the point of
releasing disk space that will only have to be allocated
again soon? Moderately frequent standard VACUUM runs are a better approach than
infrequent VACUUM FULL runs for
maintaining heavily-updated tables.

Recommended practice for most sites is to schedule a
database-wide VACUUM once a day at a
low-usage time of day, supplemented by more frequent
vacuuming of heavily-updated tables if necessary. (Some
installations with an extremely high rate of data
modification VACUUM busy tables as
often as once every few minutes.) If you have multiple
databases in a cluster, don’t forget to VACUUM each one; the program vacuumdb may be helpful.
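
If you maintain several databases from the Windows scripts shown earlier, one hedged variant is to let vacuumdb walk the whole cluster with --all instead of naming each database (path assumed as in the scripts above):

REM Hypothetical example: vacuum and analyze every database in the cluster in one run
"C:\Program Files\PostgreSQL\13.3-2.1C\bin\vacuumdb.exe" --all --analyze --username postgres --echo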

Tip: The contrib/pg_autovacuum program can be
useful for automating high-frequency vacuuming
operations.

VACUUM FULL is recommended for
cases where you know you have deleted the majority of rows in
a table, so that the steady-state size of the table can be
shrunk substantially with VACUUM
FULL’s more aggressive approach. Use plain VACUUM, not VACUUM
FULL, for routine vacuuming for space recovery.

If you have a table whose contents are deleted on a
periodic basis, consider doing it with TRUNCATE rather than using DELETE followed by VACUUM. TRUNCATE
removes the entire content of the table immediately, without
requiring a subsequent VACUUM or
VACUUM FULL to reclaim the
now-unused disk space.
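
Scripted from cmd through psql, the TRUNCATE approach might look like this (the table name tmp_import is hypothetical; the path and database follow the assumptions of the scripts above):

REM Hypothetical example: empty a periodically-cleared table; no VACUUM is needed afterwards
"C:\Program Files\PostgreSQL\13.3-2.1C\bin\psql.exe" --dbname KAMIN --username postgres -c "TRUNCATE TABLE tmp_import;"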

21.1.2. Updating planner statistics

The PostgreSQL query
planner relies on statistical information about the contents
of tables in order to generate good plans for queries. These
statistics are gathered by the ANALYZE command, which can be invoked by
itself or as an optional step in VACUUM. It is important to have reasonably
accurate statistics, otherwise poor choices of plans may
degrade database performance.

As with vacuuming for space recovery, frequent updates of
statistics are more useful for heavily-updated tables than
for seldom-updated ones. But even for a heavily-updated
table, there may be no need for statistics updates if the
statistical distribution of the data is not changing much. A
simple rule of thumb is to think about how much the minimum
and maximum values of the columns in the table change. For
example, a timestamp column that
contains the time of row update will have a
constantly-increasing maximum value as rows are added and
updated; such a column will probably need more frequent
statistics updates than, say, a column containing URLs for
pages accessed on a website. The URL column may receive
changes just as often, but the statistical distribution of
its values probably changes relatively slowly.

It is possible to run ANALYZE on
specific tables and even just specific columns of a table, so
the flexibility exists to update some statistics more
frequently than others if your application requires it. In
practice, however, the usefulness of this feature is
doubtful. Beginning in PostgreSQL 7.2, ANALYZE is a fairly fast operation even on
large tables, because it uses a statistical random sampling
of the rows of a table rather than reading every single row.
So it’s probably much simpler to just run it over the whole
database every so often.

Tip: Although per-column tweaking of ANALYZE frequency may not be very
productive, you may well find it worthwhile to do
per-column adjustment of the level of detail of the
statistics collected by ANALYZE.
Columns that are heavily used in WHERE clauses and have highly irregular
data distributions may require a finer-grain data
histogram than other columns. See ALTER TABLE SET STATISTICS.
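
A sketch of such a per-column adjustment (the table orders, the column customer_id, and the target of 500 are all hypothetical; the path follows the scripts above):

REM Hypothetical example: collect a finer-grained histogram for one heavily-filtered column
"C:\Program Files\PostgreSQL\13.3-2.1C\bin\psql.exe" --dbname KAMIN --username postgres -c "ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500; ANALYZE orders;"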

Recommended practice for most sites is to schedule a
database-wide ANALYZE once a day at
a low-usage time of day; this can usefully be combined with a
nightly VACUUM. However, sites with
relatively slowly changing table statistics may find that
this is overkill, and that less-frequent ANALYZE runs are sufficient.

21.1.3. Preventing transaction ID wraparound failures

PostgreSQL's MVCC
transaction semantics depend on being able to compare
transaction ID (XID)
numbers: a row version with an insertion XID greater than the
current transaction’s XID is "in the future"
and should not be visible to the current
transaction. But since transaction IDs have limited size (32
bits at this writing) a cluster that runs for a long time
(more than 4 billion transactions) will suffer transaction ID wraparound: the XID counter
wraps around to zero, and all of a sudden transactions that
were in the past appear to be in the future — which means
their outputs become invisible. In short, catastrophic data
loss. (Actually the data is still there, but that’s cold
comfort if you can’t get at it.)

Prior to PostgreSQL 7.2,
the only defense against XID wraparound was to re-initdb at least every 4 billion transactions.
This of course was not very satisfactory for high-traffic
sites, so a better solution has been devised. The new
approach allows a server to remain up indefinitely, without
initdb or any sort of restart. The
price is this maintenance requirement: every table in the database must be
vacuumed at least once every billion transactions.

In practice this isn’t an onerous requirement, but since
the consequences of failing to meet it can be complete data
loss (not just wasted disk space or slow performance), some
special provisions have been made to help database
administrators keep track of the time since the last
VACUUM. The remainder of this
section gives the details.

The new approach to XID comparison distinguishes two
special XIDs, numbers 1 and 2 (BootstrapXID and FrozenXID). These two XIDs are always
considered older than every normal XID. Normal XIDs (those
greater than 2) are compared using modulo-2^31
arithmetic. This means that for every normal XID, there are
two billion XIDs that are "older"
and two billion that are "newer";
another way to say it is that the normal XID space is
circular with no endpoint. Therefore, once a row version has
been created with a particular normal XID, the row version
will appear to be "in the past"
for the next two billion transactions, no matter which normal
XID we are talking about. If the row version still exists
after more than two billion transactions, it will suddenly
appear to be in the future. To prevent data loss, old row
versions must be reassigned the XID FrozenXID sometime before they reach the
two-billion-transactions-old mark. Once they are assigned
this special XID, they will appear to be "in the past" to all normal transactions
regardless of wraparound issues, and so such row versions
will be good until deleted, no matter how long that is. This
reassignment of XID is handled by VACUUM.

VACUUM’s normal policy is to
reassign FrozenXID to any row
version with a normal XID more than one billion transactions
in the past. This policy preserves the original insertion XID
until it is not likely to be of interest anymore. (In fact,
most row versions will probably live and die without ever
being "frozen".) With this policy,
the maximum safe interval between VACUUM runs on any table is exactly one
billion transactions: if you wait longer, it’s possible that
a row version that was not quite old enough to be reassigned
last time is now more than two billion transactions old and
has wrapped around into the future — i.e., is lost to you.
(Of course, it’ll reappear after another two billion
transactions, but that’s no help.)

Since periodic VACUUM runs are
needed anyway for the reasons described earlier, it’s
unlikely that any table would not be vacuumed for as long as
a billion transactions. But to help administrators ensure
this constraint is met, VACUUM
stores transaction ID statistics in the system table
pg_database. In particular, the
datfrozenxid column of a database’s
pg_database row is updated at the
completion of any database-wide VACUUM operation (i.e., VACUUM that does not name a specific table).
The value stored in this field is the freeze cutoff XID that
was used by that VACUUM command. All
normal XIDs older than this cutoff XID are guaranteed to have
been replaced by FrozenXID within
that database. A convenient way to examine this information
is to execute the query

SELECT datname, age(datfrozenxid) FROM pg_database;

The age column measures the
number of transactions from the cutoff XID to the current
transaction’s XID.

With the standard freezing policy, the age column will start at one billion for a
freshly-vacuumed database. When the age approaches two billion, the database must
be vacuumed again to avoid risk of wraparound failures.
Recommended practice is to VACUUM
each database at least once every half-a-billion (500
million) transactions, so as to provide plenty of safety
margin. To help meet this rule, each database-wide VACUUM automatically delivers a warning if
there are any pg_database entries
showing an age of more than 1.5
billion transactions, for example:

play=# VACUUM;
WARNING:  some databases have not been vacuumed in 1613770184 transactions
HINT:  Better vacuum them within 533713463 transactions, or you may have a wraparound failure.
VACUUM

VACUUM with the FREEZE option uses a more aggressive freezing
policy: row versions are frozen if they are old enough to be
considered good by all open transactions. In particular, if a
VACUUM FREEZE is performed in an
otherwise-idle database, it is guaranteed that all row versions in that
database will be frozen. Hence, as long as the database is
not modified in any way, it will not need subsequent
vacuuming to avoid transaction ID wraparound problems. This
technique is used by initdb to
prepare the template0 database. It
should also be used to prepare any user-created databases
that are to be marked datallowconn =
false in pg_database, since there isn’t any convenient
way to VACUUM a database that you
can’t connect to. Note that VACUUM’s
automatic warning message about unvacuumed databases will
ignore pg_database entries with
datallowconn = false, so as to avoid giving false warnings
about these databases; therefore it’s up to you to ensure
that such databases are frozen correctly.
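
From the Windows command line this freezing pass can be driven with vacuumdb's --freeze switch, which runs VACUUM with the FREEZE option. A minimal sketch (database name and path as assumed in the scripts above):

REM Hypothetical example: freeze all row versions in an otherwise-idle database
"C:\Program Files\PostgreSQL\13.3-2.1C\bin\vacuumdb.exe" --dbname KAMIN --username postgres --freeze --echo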

Warning

To be sure of safety against transaction
wraparound, it is necessary to vacuum every table, including
system catalogs, in every database at least
once every billion transactions. We have seen data
loss situations caused by people deciding that they
only needed to vacuum their active user tables,
rather than issuing database-wide vacuum commands.
That will appear to work fine … for a while.

Hello.

I want to set up automatic maintenance for a 1C database.

Save it to a .cmd file and add it to Task Scheduler.

I cannot find a working script for this.

Here is one example, but which database will it maintain? Everything there is in postgresql?

Please share a working version. I have heard it should be run as:

ANALYZE (without VACUUM)    every day in the morning

REINDEX    once a week

VACUUM+FULL+ANALYZE    once a week (on the weekend)

echo off

for /f "delims=." %%i in ('wmic.exe OS get LocalDateTime ^| find "."') do set sDateTime=%%i

set f_year=%sDateTime:~0,4%

set f_month=%sDateTime:~4,2%

set f_day=%sDateTime:~6,2%

set f_hour=%sDateTime:~8,2%

set f_min=%sDateTime:~10,2%

set f_sec=%sDateTime:~12,2%

set f_name_log=H:\logs\sql\postgresql_%f_year%.%f_month%.%f_day%

echo on

echo start: %f_year%.%f_month%.%f_day% - %f_hour%:%f_min%:%f_sec% >>%f_name_log%_vacuum.log

SET PGPASSWORD=*****

"C:\Program Files\PostgreSQL 1C\9.6\bin\vacuumdb.exe" --dbname postgres --host 127.0.0.1 --port 5432 --username postgres --echo --full --analyze >>%f_name_log%_vacuum.log

echo end: %f_year%.%f_month%.%f_day% - %f_hour%:%f_min%:%f_sec% >>%f_name_log%_vacuum.log

echo start: %f_year%.%f_month%.%f_day% - %f_hour%:%f_min%:%f_sec% >>%f_name_log%_reindex.log

"C:\Program Files\PostgreSQL 1C\9.6\bin\reindexdb.exe" --dbname postgres --host 127.0.0.1 --port 5432 --username postgres --echo >>%f_name_log%_reindex.log

echo end: %f_year%.%f_month%.%f_day% - %f_hour%:%f_min%:%f_sec% >>%f_name_log%_reindex.log
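
A note on the question above: with --dbname postgres this script maintains only the postgres service database. To cover every database in the cluster in one run, both vacuumdb and reindexdb accept --all in place of --dbname.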
