While performing Oracle to Postgres migrations, we may come across many interesting conversion challenges. Sometimes, the same logic produces different behavior in Oracle and Postgres, and minor mistakes in such conversions can create a lot of rework while fixing application functionality. One such mistake is the handling of trailing zeroes with the numeric datatype in PostgreSQL, when converting Oracle's NUMBER datatype to PostgreSQL's NUMERIC datatype. In this blog post, we shall discuss the ways to handle trailing zeroes with the numeric datatype in Postgres.

Precision and Scale
Precision is the total number of digits in a number, including the digits after the decimal point, whereas scale is the number of digits after the decimal point. For 123.45, the precision is 5 and the scale is 2.
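As a quick illustration (shown here in PostgreSQL syntax only as an example; the concepts apply equally in Oracle), casting 123.45 to NUMERIC(5,2) fits exactly, since the value needs a precision of 5 and a scale of 2.
postgres=# SELECT 123.45::numeric(5,2);
 numeric
---------
 123.45
(1 row)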
NUMBER datatype in Oracle
The NUMBER datatype in Oracle can store positive and negative fixed-point numbers with a precision and a scale. In Oracle, the precision can range from 1 to 38 digits, while the scale can range from -84 to 127. Oracle also allows a negative scale, but we shall discuss that in detail in our next article.
In the following example, the precision is 20 and the scale is 10.
NUMBER (20,10) -> Precision is 20 and Scale is 10
NUMERIC datatype in PostgreSQL
The NUMERIC datatype in PostgreSQL can store positive and negative fixed-point numbers with a precision and a scale. A NUMERIC value can have up to 131072 digits before the decimal point and up to 16383 digits after the decimal point.
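For example, the following PostgreSQL declaration is equivalent to the Oracle definition shown above.
NUMERIC (20,10) -> Precision is 20 and Scale is 10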
Behavior of Number Datatype in Oracle
Let us consider an example of a table with NUMBER datatype in Oracle.
CREATE TABLE test_number(i NUMBER(20,10));
We shall now insert some data using the following command.
INSERT INTO test_number VALUES(10.0500);
Now, let us try selecting this data and see if there are any trailing zeroes. As the output shows, there are none.
SQL> SELECT * FROM test_number;
I
----------
10.05
Behavior of Numeric Datatype in PostgreSQL
Let us consider an example of a table with NUMERIC datatype in PostgreSQL.
CREATE TABLE test_numeric(i NUMERIC(20,10));
We shall now insert the same data inserted in Oracle using the following command.
INSERT INTO test_numeric VALUES(10.0500);
Now, let us try selecting this data and see if there are any trailing zeros.
postgres=# SELECT * FROM test_numeric;
i
---------------
10.0500000000
(1 row)
In the above output, we can notice that there are trailing zeroes up to the limit of the scale specified for the column. In this example, column i is created with the datatype NUMERIC(20,10), so the scale is 10. For this reason, we see 10 digits after the decimal point.
Work-arounds for fixing trailing zeroes in PostgreSQL
There are multiple solutions to avoid trailing zeroes depending on the PostgreSQL version we deploy.
Until PostgreSQL 12
Until PostgreSQL 12, there is no direct function that helps with a simple conversion. For this reason, we have to take a different approach: type casting the column to a text datatype and then applying a trim function.
See the following example, where we use the trim() function.
postgres=# SELECT trim(trailing '0' from 10.00500::text)::numeric;
rtrim
--------
10.005
(1 row)
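The same approach can also be applied to a table column. Here is a minimal sketch against the test_numeric table created earlier; note that it assumes the text form of the value always contains a decimal point (which NUMERIC(20,10) guarantees), since trimming trailing zeroes from an integer value such as 1000 would strip significant digits.
postgres=# SELECT trim(trailing '0' from i::text)::numeric AS i FROM test_numeric;
   i
-------
 10.05
(1 row)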
PostgreSQL 13 and above
Starting from PostgreSQL 13, we can simply leverage the trim_scale() function, which fixes the issue with trailing zeroes.
Let us see the following example, where we use trim_scale() to trim the trailing zeroes without any type casting.
postgres=# SELECT trim_scale(10.00500);
trim_scale
------------
10.005
(1 row)
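The trim_scale() function can also be applied directly to a table column, for example against the test_numeric table created earlier.
postgres=# SELECT trim_scale(i) FROM test_numeric;
 trim_scale
------------
 10.05
(1 row)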
However, the above workarounds require some changes in the application. The embedded SQL code or the SQL generated by an ORM must be modified to handle these datatypes. Is there a way to make it seamless for the application? Let us discuss that in the following section.
Solution to fix trailing zeroes with no changes in the application
In order to avoid any application-level changes to handle trailing zeroes, we could use the numeric datatype in Postgres without any precision or scale. Consider the following example, where we create the same table as earlier but without any precision or scale defined.
CREATE TABLE test_numeric(i numeric);
INSERT INTO test_numeric VALUES(10.0500);
postgres=# SELECT * FROM test_numeric;
i
---------
10.0500
(1 row)
In the above output, we can notice that it displays the exact number of zeroes that were inserted by the application, with no additional zeroes. We also see that the output is not trimmed, whereas Oracle trims the zeroes as seen in the first example: in Oracle, if a user inserts 0.0500, a select returns 0.05. If there is a specific reason for which these additional zeroes have been inserted, then it is very important for the application that these zeroes are not trimmed. But if the requirement is to trim the zeroes the way Oracle does, we can leverage the workarounds discussed earlier.
Implications of using a numeric data type without a precision and scale
An important question is what the implications are of using a plain numeric datatype, and what its precision and scale will be. There is not much confusion here: the value simply stays within the implementation limits, meaning up to 131072 digits before the decimal point and up to 16383 digits after it, with no declared restriction on precision or scale. If the application is aware that there is no guaranteed restriction on scale or precision, then there is no concern in utilizing this solution.
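As a brief illustration of this behavior, continuing with the unconstrained test_numeric table from the previous section, a value that would overflow NUMERIC(20,10) is accepted without complaint and stored with exactly the scale it was supplied with.
postgres=# INSERT INTO test_numeric VALUES (123456789012.3456789012345);
INSERT 0 1
postgres=# SELECT * FROM test_numeric;
 i
----------------------------
 10.0500
 123456789012.3456789012345
(2 rows)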
Conclusion
We have so far seen how to handle trailing zeroes with the numeric datatype in PostgreSQL. In our next blog article, we shall discuss the conversion of an Oracle NUMBER datatype with a negative scale to PostgreSQL. In the future, we shall also discuss the performance implications of the numeric datatype in PostgreSQL. Please subscribe to our newsletters to get notified about our articles and the latest updates on PostgreSQL and migrations. If you are looking for any services around migrations to PostgreSQL or consulting services for PostgreSQL, please fill in the following form or contact us by sending an email to sales@migops.com