[Postgres-xl-general] Question about performance of Postgres-XL vs. PostgreSQL

Andreas Mueller andreas.mueller at asg.com
Wed Feb 25 00:05:15 PST 2015


Hi Mason,

>There is more latency in Postgres-XL with communication between GTM, coordinator and datanodes.
>As you add more concurrency, however, you will get greater total throughput, whereas regular PostgreSQL will peak sooner, contending for resources.

Yes, but I did not expect a factor of 0.1. In some other context (I think the BDR (Bi-directional replication) project) I read about a factor of 0.6–0.7.

>Is it doing it in one transaction or 100,000?
>I assume it is doing each statement individually? If you use the COPY command to load those 100,000 rows, performance will be much faster.
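
For reference, a COPY-based load from JDBC would look roughly like the sketch below. This is only a sketch, assuming the PostgreSQL JDBC driver's CopyManager API is available; the table name, CSV file name and connection URL are placeholders, not our actual setup:

    import java.io.FileReader;
    import java.io.Reader;
    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.postgresql.PGConnection;
    import org.postgresql.copy.CopyManager;

    public class CopyLoad {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://coordinator:5432/testdb", "test", "secret")) {
                // CopyManager comes from the PostgreSQL JDBC driver.
                CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();
                try (Reader csv = new FileReader("rows.csv")) {
                    // Streams the whole file through a single COPY statement,
                    // avoiding one round trip per row.
                    long rows = copy.copyIn(
                            "COPY test_table FROM STDIN WITH (FORMAT csv)", csv);
                    System.out.println("Loaded " + rows + " rows");
                }
            }
        }
    }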

I do it in one executeBatch, and there is no time difference between autocommit and a single commit after the complete operation. That should be a single transaction, right?
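
For context, the insert loop looks roughly like the following simplified sketch (table name, column names and connection URL are placeholders, not our real schema):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchInsert {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://coordinator:5432/testdb", "test", "secret")) {
                conn.setAutoCommit(false);  // one transaction for the whole batch
                String sql = "INSERT INTO test_table (id, payload) VALUES (?, ?)";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (int i = 0; i < 100_000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "row " + i);
                        ps.addBatch();
                    }
                    // Sends the queued statements, but each row is still an
                    // individual INSERT executed on the server.
                    ps.executeBatch();
                }
                conn.commit();
            }
        }
    }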

The test was not really about loading a lot of rows. We had a more complex test scenario, and when I ran it against Postgres-XL it was much slower than I expected.
So I made this simple test to check whether something specific to the test scenario performs that much slower on Postgres-XL, or whether this is the normal “penalty” for Postgres-XL.

Regards,
Andreas

