Code:
...
pg_dump: dumping contents of table crm_department_output
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: out of memory
DETAIL: Failed on request of size 536870912.
pg_dump: The command was: COPY public.crm_department_output (crm_department_output_id, document_type, document_format_type, crm_department_id, ocument_date, origin_type, origin_id, owner_id, log_creation_user, log_creation_dt, log_modification_user, log_modification_dt, document, checkout_dt, checkout_user, checkin_dt, name, document_sequence_integer, amount_excl_vat, amount_incl_vat, glb_currency_id, currency_amount_excl_vat, currency_amount_incl_vat, document_reference, crm_contactperson_id, mailed_dt, faxed_dt, cost_price, printed_dt, start_date, end_date) TO stdout;
pg_dump: *** aborted because of error
The table consists of more than 3000 rows. Each row contains a bytea column, and this column holds documents of up to approximately 130MB.
Running this COPY query in pgAdmin, redirecting to a file instead of STDOUT, produces the same error. But when we add WITH BINARY to the query, it runs fine! That is why we think this is not data corruption but rather a memory problem. When dealing with memory problems in Postgres, there are three settings we can adjust: shared_buffers, work_mem and maintenance_work_mem. Currently these are set to 32MB, 1MB and 16MB respectively.
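For reference, the two variants we tried in pgAdmin look roughly like this (the server-side output path is just a placeholder and the long column list is omitted for brevity):

Code:
-- Text-format COPY: fails with the same "out of memory" error as pg_dump
COPY public.crm_department_output TO '/tmp/crm_department_output.txt';

-- Binary-format COPY: completes without error
COPY public.crm_department_output TO '/tmp/crm_department_output.bin' WITH BINARY;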
Tonight we are going to change shared_buffers to 256MB and restart the database server. Would this be a good idea? Could it solve the problem, or are we changing the wrong value?
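For completeness, after the restart we plan to confirm from a fresh session which values are actually in effect, e.g.:

Code:
-- Sanity check of the settings before and after tonight's restart
SHOW shared_buffers;          -- currently 32MB, planned 256MB
SHOW work_mem;                -- currently 1MB
SHOW maintenance_work_mem;    -- currently 16MB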