Applied Patches

Tom Lane pushed:

  • Absorb -D_USE_32BIT_TIME_T switch from Perl, if relevant. Commit 3c163a7fc's original choice to ignore all #define symbols whose names begin with underscore turns out to be too simplistic. On Windows, some Perl installations are built with -D_USE_32BIT_TIME_T, and we must absorb that or we get the wrong result for sizeof(PerlInterpreter). This effectively re-reverts commit ef58b87df, which injected that symbol in a hacky way, making it apply to all of Postgres not just PL/Perl. More significantly, it did so on *all* 32-bit Windows builds, even when the Perl build to be used did not select this option; so that it fails to work properly with some newer Perl builds. By making this change, we would be introducing an ABI break in 32-bit Windows builds; but fortunately we have not used type time_t in any exported Postgres APIs in a long time. So it should be OK, both for PL/Perl itself and for third-party extensions, if an extension library is built with a different _USE_32BIT_TIME_T setting than the core code. Patch by me, based on research by Ashutosh Sharma and Robert Haas. Back-patch to all supported branches, as commit 3c163a7fc was. Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com https://git.postgresql.org/pg/commitdiff/5a5c2feca3fd858e70ea348822595547e6fa6c15
  • Handle elog(FATAL) during ROLLBACK more robustly. Stress testing by Andreas Seltenreich disclosed longstanding problems that occur if a FATAL exit (e.g. due to receipt of SIGTERM) occurs while we are trying to execute a ROLLBACK of an already-failed transaction. In such a case, xact.c is in TBLOCK_ABORT state, so that AbortOutOfAnyTransaction would skip AbortTransaction and go straight to CleanupTransaction. This led to an assert failure in an assert-enabled build (due to the ROLLBACK's portal still having a cleanup hook) or without assertions, to a FATAL exit complaining about "cannot drop active portal". The latter's not disastrous, perhaps, but it's messy enough to want to improve it. We don't really want to run all of AbortTransaction in this code path. The minimum required to clean up the open portal safely is to do AtAbort_Memory and AtAbort_Portals. It seems like a good idea to do AtAbort_Memory unconditionally, to be entirely sure that we are starting with a safe CurrentMemoryContext. That means that if the main loop in AbortOutOfAnyTransaction does nothing, we need an extra step at the bottom to restore CurrentMemoryContext = TopMemoryContext, which I chose to do by invoking AtCleanup_Memory. This'll result in calling AtCleanup_Memory twice in many of the paths through this function, but that seems harmless and reasonably inexpensive. The original motivation for the assertion in AtCleanup_Portals was that we wanted to be sure that any user-defined code executed as a consequence of the cleanup hook runs during AbortTransaction not CleanupTransaction. That still seems like a valid concern, and now that we've seen one case of the assertion firing --- which means that exactly that would have happened in a production build --- let's replace the Assert with a runtime check. If we see the cleanup hook still set, we'll emit a WARNING and just drop the hook unexecuted. This has been like this a long time, so back-patch to all supported branches. Discussion: https://postgr.es/m/877ey7bmun.fsf@ansel.ydns.eu https://git.postgresql.org/pg/commitdiff/5b6289c1e07dc45f09c3169a189e60d2fcaec2b3
  • Final pgindent + perltidy run for v10. https://git.postgresql.org/pg/commitdiff/21d304dfedb4f26d0d6587d9ac39b1b5c499bb55
  • Stamp HEAD as 11devel. Note that we no longer require any manual adjustments to shared-library minor version numbers, cf commit a3bce17ef. So this should be everything. https://git.postgresql.org/pg/commitdiff/9f14dc393bd441dd9251bea2a5a3ad7f889b03c5
  • Distinguish wait-for-connection from wait-for-write-ready on Windows. The API for WaitLatch and friends followed the Unix convention in which waiting for a socket connection to complete is identical to waiting for the socket to accept a write. While Windows provides a select(2) emulation that agrees with that, the native WaitForMultipleObjects API treats them as quite different --- and for some bizarre reason, it will report a not-yet-connected socket as write-ready. libpq itself has so far escaped dealing with this because it waits with select(), but in libpqwalreceiver.c we want to wait using WaitLatchOrSocket. The semantics mismatch resulted in replication connection failures on Windows, but only for remote connections (apparently, localhost connections complete immediately, or at least too fast for anyone to have noticed the problem in single-machine testing). To fix, introduce an additional WL_SOCKET_CONNECTED wait flag for WaitLatchOrSocket, which is identical to WL_SOCKET_WRITEABLE on non-Windows, but results in waiting for FD_CONNECT events on Windows. Ideally, we would also distinguish the two conditions in the API for PQconnectPoll(), but changing that API at this point seems infeasible. Instead, cheat by checking for PQstatus() == CONNECTION_STARTED to determine that we're still waiting for the connection to complete. (This is a cheat mainly because CONNECTION_STARTED is documented as an internal state rather than something callers should rely on. Perhaps we ought to change the documentation ... but this patch doesn't.) Per reports from Jobin Augustine and Igor Neyman. Back-patch to v10 where commit 1e8a85009 exposed this longstanding shortcoming. Andres Freund, minor fix and some code review/beautification by me. Discussion: https://postgr.es/m/CAHBggj8g2T+ZDcACZ2FmzX9CTxkWjKBsHd6NkYB4i9Ojf6K1Fw@mail.gmail.com https://git.postgresql.org/pg/commitdiff/f3a4d7e7c290ba630ba0e6e6f009ac27cd3feb03
  • Simplify plpgsql's check for simple expressions. plpgsql wants to recognize expressions that it can execute directly via ExecEvalExpr() instead of going through the full SPI machinery. Originally the test for this consisted of recursively groveling through the post-planning expression tree to see if it contained only nodes that plpgsql recognized as safe. That was a major maintenance headache, since it required updating plpgsql every time we added any kind of expression node. It was also kind of expensive, so over time we added various pre-planning checks to try to short-circuit having to do that. Robert Haas pointed out that as of the SRF-processing changes in v10, particularly the addition of Query.hasTargetSRFs, there really isn't any reason to make the recursive scan at all: the initial checks cover everything we really care about. We do have to make sure that those checks agree with what inline_function() considers, so that inlining of a function that formerly wasn't inlined can't cause an expression considered simple to become non-simple. Hence, delete the recursive function exec_simple_check_node(), and tweak those other tests to more exactly agree with inline_function(). Adjust some comments and function naming to match. Discussion: https://postgr.es/m/CA+TgmoZGZpwdEV2FQWaVxA_qZXsQE1DAS5Fu8fwxXDNvfndiUQ@mail.gmail.com https://git.postgresql.org/pg/commitdiff/00418c61244138bd8ac2de58076a1d0dd4f539f3
  • Avoid out-of-memory in a hash join with many duplicate inner keys. The executor is capable of splitting buckets during a hash join if too much memory is being used by a small number of buckets. However, this only helps if a bucket's population is actually divisible; if all the hash keys are alike, the tuples still end up in the same new bucket. This can result in an OOM failure if there are enough inner keys with identical hash values. The planner's cost estimates will bias it against choosing a hash join in such situations, but not by so much that it will never do so. To mitigate the OOM hazard, explicitly estimate the hash bucket space needed by just the inner side's most common value, and if that would exceed work_mem then add disable_cost to the hash cost estimate. This approach doesn't account for the possibility that two or more common values would share the same hash value. On the other hand, work_mem is normally a fairly conservative bound, so that eating two or more times that much space is probably not going to kill us. If we have no stats about the inner side, ignore this consideration. There was some discussion of making a conservative assumption, but that would effectively result in disabling hash join whenever we lack stats, which seems like an overreaction given how seldom the problem manifests in the field. Per a complaint from David Hinkle. Although this could be viewed as a bug fix, the lack of similar complaints weighs against back-patching; indeed we waited for v11 because it seemed already rather late in the v10 cycle to be making plan choice changes like this one. Discussion: https://postgr.es/m/32013.1487271761@sss.pgh.pa.us https://git.postgresql.org/pg/commitdiff/4867d7f62f7363909526d84d8fa1e9cf434cffc6
  • Make simpler-simple-expressions code cope with a Gather plan. Commit 00418c612 expected that the plan generated for a simple-expression query would always be a plain Result node. However, if force_parallel_mode is on, the planner might stick a Gather atop that. Cope by looking through the Gather. For safety, assert that the Gather's tlist is trivial. Per buildfarm. Discussion: https://postgr.es/m/23425.1502822098@sss.pgh.pa.us https://git.postgresql.org/pg/commitdiff/b73f1b5c29e0ace5a281bc13ce09dea30e1b66de
  • Make the planner assume that the entries in a VALUES list are distinct. Previously, if we had to estimate the number of distinct values in a VALUES column, we fell back on the default behavior used whenever we lack statistics, which effectively is that there are Min(# of entries, 200) distinct values. This can be very badly off with a large VALUES list, as noted by Jeff Janes. We could consider actually running an ANALYZE-like scan on the VALUES, but that seems unduly expensive, and anyway it could not deliver reliable info if the entries are not all constants. What seems like a better choice is to assume that the values are all distinct. This will sometimes be just as wrong as the old code, but it seems more likely to be more nearly right in many common cases. Also, it is more consistent with what happens in some related cases, for example WHERE x = ANY(ARRAY[1,2,3,...,n]) and WHERE x = ANY(VALUES (1),(2),(3),...,(n)) now are estimated similarly. This was discussed some time ago, but consensus was it'd be better to slip it in at the start of a development cycle not near the end. (It should've gone into v10, really, but I forgot about it.) Discussion: https://postgr.es/m/CAMkU=1xHkyPa8VQgGcCNg3RMFFvVxUdOpus1gKcFuvVi0w6Acg@mail.gmail.com https://git.postgresql.org/pg/commitdiff/2b74303637edc09cf692fbfab3fd93a5e47ccabf
  • Extend the default rules file for contrib/unaccent with Vietnamese letters. Improve generate_unaccent_rules.py to handle composed characters whose base is another composed character rather than a plain letter. The net effect of this is to add a bunch of multi-accented Vietnamese characters to unaccent.rules. Original complaint from Kha Nguyen, diagnosis of the script's shortcoming by Thomas Munro. Dang Minh Huong and Michael Paquier Discussion: https://postgr.es/m/CALo3sF6EC8cy1F2JUz=GRf5h4LMUJTaG3qpdoiLrNbWEXL-tRg@mail.gmail.com https://git.postgresql.org/pg/commitdiff/ec0a69e49bf41a37b5c2d6f6be66d8abae00ee05
  • Add missing "static" marker. Per pademelon. https://git.postgresql.org/pg/commitdiff/963af96920fabf5fd7ee28ecc96521f371c13a4b
  • Further tweaks to compiler flags for PL/Perl on Windows. It now emerges that we can only rely on Perl to tell us we must use -D_USE_32BIT_TIME_T if it's Perl 5.13.4 or later. For older versions, revert to our previous practice of assuming we need that symbol in all 32-bit Windows builds. This is not ideal, but inquiring into which compiler version Perl was built with seems far too fragile. In any case, we had not previously had complaints about these old Perl versions, so let's assume this is Good Enough. (It's still better than the situation ante commit 5a5c2feca, in that at least the effects are confined to PL/Perl rather than the whole PG build.) Back-patch to all supported versions, like 5a5c2feca and predecessors. Discussion: https://postgr.es/m/CANFyU97OVQ3+Mzfmt3MhuUm5NwPU=-FtbNH5Eb7nZL9ua8=rcA@mail.gmail.com https://git.postgresql.org/pg/commitdiff/b5178c5d08ca59e30f9d9428fa6fdb2741794e65
  • Fix ExecReScanGatherMerge. Not surprisingly, since it'd never ever been tested, ExecReScanGatherMerge didn't work. Fix it, and add a regression test case to exercise it. Amit Kapila Discussion: https://postgr.es/m/CAA4eK1JkByysFJNh9M349u_nNjqETuEnY_y1VUc_kJiU0bxtaQ@mail.gmail.com https://git.postgresql.org/pg/commitdiff/a2b70c89ca1a5fcf6181d3c777d82e7b83d2de1b
  • Temporarily revert test case from a2b70c89ca1a5fcf6181d3c777d82e7b83d2de1b. That code patch was good as far as it went, but the associated test case has exposed fundamental brain damage in the parallel scan mechanism, which is going to take nontrivial work to correct. In the interests of getting the buildfarm back to green so that unrelated work can proceed, let's temporarily remove the test case. https://git.postgresql.org/pg/commitdiff/a20aac890a89e6f88e841dedbbfa8d9d5f7309fc
  • Fix possible core dump in parallel restore when using a TOC list. Commit 3eb9a5e7c unintentionally introduced an ordering dependency into restore_toc_entries_prefork(). The existing coding of reduce_dependencies() contains a check to skip moving a TOC entry to the ready_list if it wasn't initially in the pending_list. This used to suffice to prevent reduce_dependencies() from trying to move anything into the ready_list during restore_toc_entries_prefork(), because the pending_list stayed empty throughout that phase; but it no longer does. The problem doesn't manifest unless the TOC has been reordered by SortTocFromFile, which is how I missed it in testing. To fix, just add a test for ready_list == NULL, converting the call with NULL from a poor man's sanity check into an explicit command not to touch TOC items' list membership. Clarify some of the comments around this; in particular, note the primary purpose of the check for pending_list membership, which is to ensure that we can't try to restore the same item twice, in case a TOC list forces it to be restored before its dependency count goes to zero. Per report from Fabrízio de Royes Mello. Back-patch to 9.3, like the previous commit. Discussion: https://postgr.es/m/CAFcNs+pjuv0JL_x4+=71TPUPjdLHOXA4YfT32myj_OrrZb4ohA@mail.gmail.com https://git.postgresql.org/pg/commitdiff/b1c2d76a2fcef812af0be3343082414d401909c8
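The Assert-to-runtime-check change in the elog(FATAL)-during-ROLLBACK fix above can be sketched in a few lines of C. SketchPortal, dummy_hook, and at_cleanup_portal are invented, heavily simplified names; the real logic lives in PostgreSQL's portalmem.c and involves much more state.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical stand-in for a portal with a user-supplied cleanup hook. */
typedef struct SketchPortal
{
    const char *name;
    void      (*cleanup)(struct SketchPortal *portal);  /* cleanup hook */
} SketchPortal;

static void
dummy_hook(struct SketchPortal *portal)
{
    (void) portal;              /* placeholder user-defined cleanup */
}

/* Before the fix this was effectively Assert(portal->cleanup == NULL),
 * which aborts an assert-enabled build.  The commit's approach: if the
 * hook is unexpectedly still set at CleanupTransaction time, emit a
 * WARNING and drop it unexecuted, so production builds degrade gracefully.
 * Returns 1 if a warning was issued, 0 otherwise. */
static int
at_cleanup_portal(SketchPortal *portal)
{
    if (portal->cleanup != NULL)
    {
        fprintf(stderr, "WARNING: portal \"%s\" still has cleanup hook\n",
                portal->name);
        portal->cleanup = NULL; /* drop the hook without running it */
        return 1;
    }
    return 0;
}
```

The point of the design choice: user code attached to the hook must only ever run during AbortTransaction, so at cleanup time the safe options are "nothing to do" or "warn and discard", never "execute".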
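The flag translation at the heart of the WL_SOCKET_CONNECTED commit above can be illustrated with a toy mapping. The WL_* values here mimic PostgreSQL's latch flags but the EV_* event bits and wait_events_for_flags are invented placeholders for the platform event masks (FD_READ/FD_WRITE/FD_CONNECT on Windows); this is a sketch of the idea, not the actual latch.c code.

```c
/* Illustrative wait flags (values are arbitrary, not PostgreSQL's). */
#define WL_SOCKET_READABLE   (1 << 0)
#define WL_SOCKET_WRITEABLE  (1 << 1)
#define WL_SOCKET_CONNECTED  (1 << 2)

/* Hypothetical platform event bits standing in for FD_READ / FD_WRITE /
 * FD_CONNECT. */
#define EV_READ     (1 << 0)
#define EV_WRITE    (1 << 1)
#define EV_CONNECT  (1 << 2)

/* On Unix, "connection completed" and "writable" are the same condition,
 * so WL_SOCKET_CONNECTED collapses into the write event.  On Windows the
 * native API reports them differently, so it must map to a distinct
 * FD_CONNECT-style event. */
static int
wait_events_for_flags(int wakeEvents, int is_windows)
{
    int events = 0;

    if (wakeEvents & WL_SOCKET_READABLE)
        events |= EV_READ;
    if (wakeEvents & WL_SOCKET_WRITEABLE)
        events |= EV_WRITE;
    if (wakeEvents & WL_SOCKET_CONNECTED)
        events |= is_windows ? EV_CONNECT : EV_WRITE;
    return events;
}
```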
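The hash-join OOM mitigation above is essentially a back-of-envelope space check. The sketch below shows the shape of that check; function names, and the idea of passing work_mem and disable_cost as plain doubles, are illustrative rather than the actual costsize.c code.

```c
/* Space the inner side's most common value (MCV) would occupy in one hash
 * bucket: every duplicate of the MCV hashes to the same bucket, so bucket
 * splitting cannot spread them out. */
static double
mcv_bucket_bytes(double inner_rows, double mcv_freq, double tuple_bytes)
{
    return inner_rows * mcv_freq * tuple_bytes;
}

/* If that one bucket alone would blow past work_mem, penalize the hash
 * join heavily (disable_cost) so the planner prefers another join method. */
static double
hash_cost_with_penalty(double base_cost, double inner_rows,
                       double mcv_freq, double tuple_bytes,
                       double work_mem_bytes, double disable_cost)
{
    if (mcv_bucket_bytes(inner_rows, mcv_freq, tuple_bytes) > work_mem_bytes)
        return base_cost + disable_cost;
    return base_cost;
}
```

For example, a million inner rows of which 50% share one key, at 100 bytes per tuple, need ~50 MB in a single bucket; with a 4 MB work_mem the penalty kicks in, while a 0.01% MCV frequency leaves the cost untouched.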

Álvaro Herrera pushed:

  • Fix error handling path in autovacuum launcher. The original code (since 00e6a16d01) was assuming aborting the transaction in autovacuum launcher was sufficient to release all resources, but in reality the launcher runs quite a lot of code outside any transaction. Re-introduce individual cleanup calls to make abort more robust. Reported-by: Robert Haas Discussion: https://postgr.es/m/CA+TgmobQVbz4K_+RSmiM9HeRKpy3vS5xnbkL95gSEnWijzprKQ@mail.gmail.com https://git.postgresql.org/pg/commitdiff/d9a622cee162775ae42aa5c1ac592760d0d777d9
  • Simplify autovacuum work-item implementation. The initial implementation of autovacuum work-items used a dynamic shared memory area (DSA). However, it's argued that dynamic shared memory is not portable enough, so we cannot rely on it being supported everywhere; at the same time, autovacuum work-items are now a critical part of the server, so it's not acceptable that they don't work in the cases where dynamic shared memory is disabled. Therefore, let's fall back to a simpler implementation of work-items that just uses autovacuum's main shared memory segment for storage. Discussion: https://postgr.es/m/CA+TgmobQVbz4K_+RSmiM9HeRKpy3vS5xnbkL95gSEnWijzprKQ@mail.gmail.com https://git.postgresql.org/pg/commitdiff/31ae1638ce35c23979f9bcbb92c6bb51744dbccb
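The simpler storage scheme described above amounts to a small fixed array in the main shared segment instead of a DSA. The toy below sketches that data structure; the struct fields, the array size, and request_workitem are invented for illustration (the real code in autovacuum.c also handles locking and consumption of items).

```c
#include <stdbool.h>

/* Fixed-size work-item array living in ordinary (static) shared memory,
 * replacing the earlier DSA-based queue.  Capacity is deliberately small. */
#define NUM_WORKITEMS 8

typedef struct SketchWorkItem
{
    bool        active;
    unsigned    dboid;      /* database of the target relation */
    unsigned    reloid;     /* target relation */
    unsigned    blockno;    /* block the work item refers to */
} SketchWorkItem;

static SketchWorkItem workitems[NUM_WORKITEMS];   /* "shared" storage */

/* Record a work item in the first free slot.  Returns false when the
 * array is full: requests are simply dropped, which is tolerable because
 * work items are advisory hints, not correctness-critical state. */
static bool
request_workitem(unsigned dboid, unsigned reloid, unsigned blockno)
{
    for (int i = 0; i < NUM_WORKITEMS; i++)
    {
        if (!workitems[i].active)
        {
            workitems[i] = (SketchWorkItem){ true, dboid, reloid, blockno };
            return true;
        }
    }
    return false;
}
```

The trade-off is capacity for portability: a static array works wherever the server runs, at the cost of a hard limit on pending items.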

Pending Patches

Amit Kapila sent in another revision of a patch to parallelize queries containing initplans.

Michaël Paquier sent in a patch to simplify ACL handling for large objects and remove superuser() checks.

Mark Rofail sent in two more revisions of a patch to implement foreign key arrays.

Masahiko Sawada sent in another revision of a patch to implement block level parallel vacuum.

Fabien COELHO sent in another revision of a patch to add more functions and operators to pgbench.

Marko Tiikkaja sent in a patch to implement INSERT .. ON CONFLICT DO SELECT [FOR ..].

Masahiko Sawada sent in a patch to enable logging the explicit relation name in VACUUM VERBOSE logs.

Álvaro Herrera sent in a patch to release lwlocks in autovacuum launcher error handling path.

Peter Eisentraut sent in a patch to document TR 35 collation options for ICU.

Tom Lane and Robert Haas traded comment patches to clarify force_parallel_mode.

Robert Haas sent in two more revisions of a patch to implement auto prewarm.

Etsuro Fujita sent in a patch to ensure that stats for triggers on partitioned tables are shown in EXPLAIN ANALYZE.

Nathan Bossart sent in another revision of a patch to allow users to specify multiple tables in VACUUM commands.

Tobias Bussmann sent in a patch to make \gx work when \set FETCH_COUNT n is in effect.

Thomas Munro sent in two more revisions of a patch to enable sharing record typmods among backends.

Masahiko Sawada sent in another revision of a patch to move extension locks out of the heavyweight lock manager.

Masahiko Sawada sent in another revision of a patch to pgbench to allow skipping creating primary keys after initialization.

Konstantin Knizhnik sent in a patch to do some secondary index access optimizations.

Pavel Stěhule sent in another revision of a patch to check the PSQL_PAGER environment variable in psql.

Pavel Stěhule sent in another revision of a patch to allow sorting describe (\d) commands in psql.

Heikki Linnakangas sent in a patch to fix shm_toc alignment.

Peter Eisentraut sent in a patch to use stdbool.h.

Ildus Kurbangaliev sent in another revision of a patch to remove the 1MB size limit in tsvector.

Robert Haas sent in two revisions of a patch to allow hash functions to have a second, optional support function.

Adrien NAYRAT sent in a patch to fix the docs for the multivariate histograms and MCV lists patch.

Beena Emerson sent in two more revisions of a patch to enable default partitions for declarative range partitions.

Oliver Ford sent in two more revisions of a patch to fix number skipping in to_number.

Antonin Houska sent in another revision of a patch to implement aggregate pushdown.

Peter Eisentraut sent in a patch to do some assorted cleanup including removing some dead code from contrib/fuzzystrmatch, our own definition of NULL, unnecessary parentheses in return statements, unnecessary casts in contrib/cube, and the endof macro, and dropping excessive dereferencing of function pointers.

Etsuro Fujita sent in another revision of a patch to fix a postgres_fdw bug.

Dilip Kumar sent in another revision of a patch to improve bitmap costing for lossy pages.

Masahiko Sawada sent in another revision of a patch to improve messaging during logical replication worker startup, split the SetSubscriptionRelState function into two, allow syscache access to subscriptions in database-less processes, and improve locking for subscriptions and subscribed relations.

Jeevan Ladhe sent in another revision of a patch to add support for default partition in declarative partitioning.

Masahiko Sawada sent in a patch to add a documentation warning about FIRST.

Konstantin Knizhnik and Douglas Doole traded patches to enable passing LIMIT to a FDW.

Daniel Gustafsson sent in another revision of a patch to allow running SSL tests against different binaries, add support for Apple Secure Transport SSL library, document Secure Transport, and fix SSL tests for connstrings with spaces.

David Steele sent in two revisions of a patch to update the low-level backup documentation to match actual behavior.

Vinayak Pokale sent in a patch to add WHENEVER statement DO CONTINUE support for ECPG.

Michael Banck sent in a patch to add an option to create a replication slot in pg_basebackup if not yet present.

Thomas Munro sent in another revision of a patch to implement parallel hashing.

Vesa-Matti J Kari sent in a patch to add HISTIGNORE for psql.

Simon Riggs sent in a patch to exclude special values in recovery_target_time.

Pavel Stěhule sent in four more revisions of a patch to fix some possible encoding issues with libxml2 functions.

Fabien COELHO sent in another revision of a patch to add pgbench TAP tests.