[postgres-xl-general] Truncating Tables >1M records takes a long time

Tomas Vondra tomas.vondra at 2ndquadrant.com
Wed Nov 22 15:12:18 PST 2017


Hi,

On 11/22/2017 11:32 PM, Matthew Tamayo-Rios wrote:
> I'm using Postgres-XL 9.5r1.6, based on PG 9.5.8.
> 
> Also, it looks like delete is broken in general: I tried deleting a
> single row and it hangs indefinitely. GTM seems to be fine and all
> other operations (select, insert, update) seem to be working fine.
> After some more troubleshooting (detailed below), it seems that
> deleting even a single row from the very large tables hangs
> indefinitely, so the only way to clear them is dropping the entire
> database (which does work).
> 
> Delete hangs even on a single id for large tables.
> 
>     delete from ids where id = '30095031-49e0-44eb-87a9-98b640773ef2';
> 
> 
> Log from coordinator just has:
> 
>     STATEMENT:  delete from ids where id = 30095031-49e0-44eb-87a9-98b640773ef2;
> 
> 
> The data node has no log entries.
> 
> Successfully created a test table and deleted a record from it:
> 
>     openlattice=> create table test( id uuid, c1 int, c2 int, PRIMARY
>     KEY (id) ) distribute by hash(id);
>     CREATE TABLE
>     => insert into test VALUES( '30095031-49e0-44eb-87a9-98b640773ef2',
>     21, 23 );
>     INSERT 0 1
>     => insert into test VALUES( '30095031-49e0-55eb-87a9-98b640773ef2',
>     21, 23 );
>     INSERT 0 1
>     => delete from test where id = '30095031-49e0-44eb-87a9-98b640773ef2';
>     DELETE 1
> 
> 
> I also checked locks and I am seeing a bunch of prepared statements
> blocking each other, but no inserts have been done in 24 hours.
> 
>  blocked_pid | blocked_user | blocking_pid | blocking_user |            blocked_statement             |   current_statement_in_blocking_process
> -------------+--------------+--------------+---------------+------------------------------------------+------------------------------------------
>        95377 |              |        95434 |               | INSERT INTO ids VALUES ($1, $2, $3, $4)  | INSERT INTO ids VALUES ($1, $2, $3, $4)
>        95377 |              |        95405 |               | INSERT INTO ids VALUES ($1, $2, $3, $4)  | INSERT INTO ids VALUES ($1, $2, $3, $4)
>        95377 |              |        95378 |               | INSERT INTO ids VALUES ($1, $2, $3, $4)  | INSERT INTO ids VALUES ($1, $2, $3, $4)
> 
> I omitted the user columns above. So it looks like tables with heavy
> write usage are getting blocked by prepared statements holding onto
> locks for too long. Any ideas on what would cause that? These tables
> are being copied from Cassandra via a short-lived ETL process in
> preparation for a migration, so there shouldn't be any locks being held.
> 

So you're doing the inserts using prepared statements? I wonder if that
might be preventing restriction to a subset of nodes (if we don't know
the parameter value at plan time, we can't compute the hash and restrict
the list of target nodes).
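
A quick way to check (just a sketch; the exact plan output depends on
the XL version, and "del_id" is merely an example name) is to compare
the plan for a literal value with the plan for a prepared statement,
and see which nodes each plan targets:

    -- with a literal, the coordinator can hash the value and should
    -- target a single data node
    EXPLAIN
    DELETE FROM ids WHERE id = '30095031-49e0-44eb-87a9-98b640773ef2';

    -- with a parameter, a generic plan may have to target all nodes
    -- (note: the first few executions may still use a custom plan with
    -- the value inlined, so run it a few times)
    PREPARE del_id(uuid) AS DELETE FROM ids WHERE id = $1;
    EXPLAIN EXECUTE del_id('30095031-49e0-44eb-87a9-98b640773ef2');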

Also, what do the backends holding the locks do? Are they waiting for
some other lock (in that case, what does the backend holding that lock
do), or are they stuck for some other reason?
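
Something along these lines should work on 9.5 to list the ungranted
lock requests together with what each backend involved is doing (a
rough sketch, adjust as needed):

    -- ungranted lock requests and the activity of the backends involved
    SELECT l.pid, l.locktype, l.relation::regclass, l.mode, l.granted,
           a.state, a.waiting, a.query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.pid = l.pid
     ORDER BY l.granted, l.pid;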

What does pg_stat_activity say?
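
In particular, look for long-running transactions. On 9.5 the view
still has the "waiting" column (wait_event only appeared in 9.6), so
something like:

    SELECT pid, state, waiting, now() - xact_start AS xact_age, query
      FROM pg_stat_activity
     WHERE state <> 'idle'
     ORDER BY xact_start;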

Can you attach gdb to the stuck backends and get the backtrace? That is,
something like 'gdb -p $PID' and then 'bt'. I suspect it might be the
GTM issue I reported a while back (see the stack at the beginning):

http://lists.postgres-xl.org/pipermail/postgres-xl-developers-postgres-xl.org/2017-November/000571.html
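
To get the backtrace, something like this (using one of the blocking
PIDs from your lock output):

    $ gdb -p 95434
    (gdb) bt
    (gdb) detach
    (gdb) quit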

Another idea is that perhaps there are some old prepared transactions.
Can you check pg_prepared_xacts on all the nodes?
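
Something like this, run on each coordinator and data node (stale
two-phase transactions would show up with old "prepared" timestamps):

    SELECT gid, prepared, owner, database
      FROM pg_prepared_xacts
     ORDER BY prepared;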

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


