PostgreSQL function slows down view


I am rewriting a Sybase view for PostgreSQL and have run into a problem.
Is there any way to improve the performance of a view that calls functions?
For example, we have a complex function:

create function func1(pid integer)
returns char(20)
as $$
begin
  -- ... do stuff
  return 'aaaa';
end; $$ language plpgsql;

And we create a view that uses the function:

create view view_client as
select code, name, func1(pid) from client;

Such a view performs terribly slowly unless "func1" is removed from it. Does PostgreSQL execute the function for every row, even if the column is not used?
A count(*) from the view with 2M rows takes 2+ minutes; without "func1" it takes ~10 seconds.
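To confirm that the per-row function calls are the cost, the two timings can be compared with EXPLAIN ANALYZE (a sketch; the second query reproduces the view body without the function column):

```sql
explain (analyze, timing off)
select count(*) from view_client;                  -- calls func1 for every row

explain (analyze, timing off)
select count(*) from (select code from client) s;  -- same row count, no function
```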

Best Answer

Posting my comment as an answer:

What exactly does the function do? Can you define it as stable or even better as immutable? – a_horse_with_no_name 12 mins ago

From the docs:

IMMUTABLE / STABLE / VOLATILE These attributes inform the query optimizer about the behavior of the function. At most one choice can be specified. If none of these appear, VOLATILE is the default assumption.

  • IMMUTABLE indicates that the function cannot modify the database and always returns the same result when given the same argument values; that is, it does not do database lookups or otherwise use information not directly present in its argument list. If this option is given, any call of the function with all-constant arguments can be immediately replaced with the function value.

  • STABLE indicates that the function cannot modify the database, and that within a single table scan it will consistently return the same result for the same argument values, but that its result could change across SQL statements. This is the appropriate selection for functions whose results depend on database lookups, parameter variables (such as the current time zone), etc. (It is inappropriate for AFTER triggers that wish to query rows modified by the current command.) Also note that the current_timestamp family of functions qualify as stable, since their values do not change within a transaction.

  • VOLATILE indicates that the function value can change even within a single table scan, so no optimizations can be made. Relatively few database functions are volatile in this sense; some examples are random(), currval(), timeofday(). But note that any function that has side-effects must be classified volatile, even if its result is quite predictable, to prevent calls from being optimized away; an example is setval().

For additional details see Section 36.6
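If func1 only reads from the database, declaring it STABLE lets the planner avoid treating every call as potentially side-effecting. A minimal sketch, assuming the function's body does a lookup (the table and column names here are hypothetical):

```sql
-- Re-declare func1 with an explicit volatility label; the default was VOLATILE.
create or replace function func1(pid integer)
returns char(20)
as $$
begin
  -- depends on a database lookup, so STABLE is the appropriate label
  return (select some_code from lookup_table where id = pid);
end;
$$ language plpgsql stable;
```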

The real question is essentially: can you optimize away f(x)?

  • Over everything (mathematical purity)
  • Over the same scan
  • Never (uses I/O or system calls)
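The three cases above map directly onto the volatility labels. A sketch, using illustrative function names (only `client` and its columns come from the question):

```sql
-- Pure math: same input always gives the same output.
create function square(x integer) returns integer
  as $$ select x * x $$ language sql immutable;

-- Database lookup: consistent within one scan, but may change between statements.
create function client_name(p integer) returns text
  as $$ select name from client where pid = p $$ language sql stable;

-- Can change within a single scan, so no optimizations are possible.
create function dice() returns double precision
  as $$ select random() $$ language sql volatile;
```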

A somewhat important note: even in production applications you can sometimes lie and declare a function IMMUTABLE, and it's perfectly fine. Just be aware of the consequences.
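One common reason for such a lie is expression indexes, which require the indexed function to be IMMUTABLE. A sketch (function and index names are hypothetical):

```sql
-- Technically a lie: the function reads the database. But it is required
-- if you want to index on the function's result. The consequence: if the
-- underlying data changes, the index silently becomes stale.
create function client_code(p integer) returns char(20)
  as $$ select code from client where pid = p $$ language sql immutable;

create index idx_client_code on client (client_code(pid));
```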