January 14th, 2025

PostgreSQL Anonymizer

PostgreSQL Anonymizer is an extension that masks PII in PostgreSQL databases. It lets developers define masking rules declaratively, supports several masking methods, and helps maintain data confidentiality and GDPR compliance during testing.


PostgreSQL Anonymizer is an extension designed to mask or replace personally identifiable information (PII) and sensitive data in PostgreSQL databases. It employs a declarative approach, allowing developers to define masking rules directly within the database schema using PostgreSQL Data Definition Language (DDL). The extension supports five masking methods: Anonymous Dumps, Static Masking, Dynamic Masking, Masking Views, and Masking Data Wrappers, each suitable for different contexts. The primary aim is to ensure data is masked at the database level to minimize exposure risks.

Additionally, it offers various masking functions, including randomization and custom functions, along with detection functions to identify columns needing anonymization. The quick start guide outlines steps to launch the extension using Docker, create a database, and set up masking rules for specific user roles. Success stories highlight its effectiveness in reinforcing GDPR compliance and maintaining data confidentiality during testing. Users have praised the extension for its ability to implement complex masking rules without sacrificing functionality.
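As a rough sketch of the declarative approach described above (table, column, and role names are hypothetical; the syntax follows the extension's documented SECURITY LABEL convention, so double-check against the current docs):

```sql
-- Install the extension and load its default data (faking dictionaries, etc.)
CREATE EXTENSION IF NOT EXISTS anon CASCADE;
SELECT anon.init();

-- Declare a masking rule directly in the schema, as a security label:
SECURITY LABEL FOR anon ON COLUMN customer.email
  IS 'MASKED WITH FUNCTION anon.fake_email()';

-- Mark a role as masked so it only ever sees anonymized values:
SECURITY LABEL FOR anon ON ROLE analyst IS 'MASKED';

-- Switch on dynamic masking for masked roles:
SELECT anon.start_dynamic_masking();
```

The same declared rules are reused by the other methods (e.g. static masking or anonymous dumps), which is the point of keeping them in the schema itself.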

- PostgreSQL Anonymizer masks PII and sensitive data in PostgreSQL databases.

- It allows developers to define masking rules directly in the database schema.

- The extension supports multiple masking methods tailored for different use cases.

- It includes various masking functions and detection capabilities for identifying sensitive data.

- Users report improved GDPR compliance and data confidentiality during testing processes.

AI: What people are saying
The comments on PostgreSQL Anonymizer highlight various perspectives and concerns regarding data anonymization tools.
  • Several users are developing or have developed similar tools, emphasizing the need for user-friendly interfaces and configurations for non-technical users.
  • There are discussions about the limitations of current anonymization methods, with some experts arguing that they may not meet GDPR standards and could be more accurately described as pseudonymization.
  • Concerns are raised about the integration of anonymization tools with existing database frameworks, particularly regarding schema changes and migration issues.
  • Users express interest in automatic identification of PII and the need for default masking of new columns to enhance data security.
  • Some commenters caution against the risk of inadvertently anonymizing production data, highlighting the importance of careful implementation.
14 comments
By @gkbrk - about 15 hours
Clickhouse has something similar called clickhouse-obfuscator [1]. It even works offline with data dumps so you can quickly prepare and send somewhat realistic example data to others.

According to its --help output, it is designed to retain the following properties of data:

- cardinalities of values (number of distinct values) for every column and for every tuple of columns;

- conditional cardinalities: number of distinct values of one column under condition on value of another column;

- probability distributions of absolute value of integers; sign of signed integers; exponent and sign for floats;

- probability distributions of length of strings;

- probability of zero values of numbers; empty strings and arrays, NULLs;

- data compression ratio when compressed with LZ77 and entropy family of codecs;

- continuity (magnitude of difference) of time values across table; continuity of floating point values.

- date component of DateTime values;

- UTF-8 validity of string values;

- string values continue to look somewhat natural

[1]: https://clickhouse.com/docs/en/operations/utilities/clickhou...

By @phoronixrly - about 16 hours
I have some experience with the 'Masking Views' functionality. If you are going to rely on it, specifically in a Rails app, know that it goes against the framework's conventions and is thus generally inconvenient. The same likely applies to any other framework that features DB schema migrations.

More specifically, the integration of this functionality at a (fortunately ex-) employer was purposefully kept away from the dev team (no motivation was offered; however, I suspect some sort of segmentation was sought), and thus did not take into account that tables with PII did in fact still need their schema changed from time to time.

This led to the anonymizer extension, together with the confidential views, only being installed on production DB instances, with dev, test, and staging instances running vanilla Postgres. With that, the possibility of catching DB migration issues related to the confidential views was pushed out to the release itself. This led to numerous failed releases which involved having the ops team intervene, manually remove the views for the duration of the release, then manually re-create them.

So,

If you plan to use this extension, and specifically its views, make sure you have it set up in exactly the same way on all environments. Also make sure that its initialisation and view creation are part of your framework's DB migrations so that they are documented and easy to reproduce precisely on new environments.

By @riskable - about 10 hours
One of the best ways to handle this sort of thing is to put things like PII in a separate database entirely and replace it with a token in the "main" database. When something like PII actually needs to be retrieved you first retrieve the token and then search the other database for said token to get the real data.

It certainly complicates things, but it provides an additional layer of separation between the PII and its related data. You can give your end users access to a database without having to worry about them getting access to the "dangerous" data. If they do need access to the data pointed to by the token, they can request access to that related database.

This method also improves performance, since you don't need to encrypt the entire database (which is often required when storing PII) and don't need to add extra security-context function calls to every database request.
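A minimal sketch of the tokenization pattern described above, using two schemas to stand in for the two separate databases (all names are hypothetical):

```sql
-- PII lives in a separate, tightly controlled store ("vault"):
CREATE SCHEMA vault;
CREATE TABLE vault.person_pii (
    token uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    email text NOT NULL,
    phone text
);

-- The main database only ever stores the opaque token:
CREATE SCHEMA app;
CREATE TABLE app.orders (
    id           bigserial PRIMARY KEY,
    person_token uuid NOT NULL,  -- no FK across databases; validated in app code
    total_cents  integer NOT NULL
);

-- Resolving PII is a second, separately authorized and auditable lookup:
-- SELECT email FROM vault.person_pii WHERE token = $1;
```

With truly separate databases there is no cross-database foreign key, so token integrity has to be enforced at the application layer, which is part of the complexity trade-off the comment mentions.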

By @gmassman - about 5 hours
This is a very handy postgres extension! We've been using it at my job for a couple years now to generate test datasets for developers. We have a weekly job that restores a prod backup to a temporary DB, installs the `anon` extension, and runs pg_dump with the masking rules. Overall we've been very happy with this workflow since it gives us a very good idea of how new features will work with our production data. The masking rules do need maintenance as our DB schema changes, but that's par for the course with these kinds of dev tools.

All that said, I wouldn't rely on this extension as a way to deliver anonymized data to downstream consumers outside of our software team. As others have pointed out, this is really more of a pseudonymization technique. It's great for removing phone numbers, emails, etc. from your data set, but it's not going to eradicate PII. Pretty much all anonymized records can be traced back to their source data through PKs or FKs.

By @foreigner - about 12 hours
TIL that PostgreSQL has SECURITY LABEL! It seems like this could be useful for storing all sorts of metadata about database objects, not just security stuff. E.g. like the COMMENT but not global. From reading the docs it looks like you need a "label provider" to get it to work though. I can only seem to find a few label providers around, does anyone know of one that isn't security/anonymization related and could be used more generically?
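For reference, the generic syntax is short; every label must go through a loaded label provider ("anon" is used below since it's the one under discussion, and the object name is illustrative):

```sql
-- Attach a label to an object via a provider:
SECURITY LABEL FOR anon ON TABLE invoice IS 'MASKED';

-- All labels, whatever the provider, can be inspected
-- through the pg_seclabels system view:
SELECT objtype, objname, provider, label FROM pg_seclabels;
```

The provider's C hook validates the label string at assignment time, which is why an arbitrary, provider-less "generic metadata" use doesn't work out of the box.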
By @joram87 - about 6 hours
I've been working on something similar and am starting a company around the idea! We realized that a lot of people had concerns or challenges with installing an extension on their production database, and also that they wanted non-technical folks in compliance or HR to be able to configure and maintain the rules for individual employees. PostgreSQL Anonymizer is a database extension, but we structured ours as a proxy server that hides/anonymizes/filters the data. We made a web dashboard that simplifies the configuration process and lets you configure what to do if columns get added to the database (mask or hide new columns by default). We're about to go GA, and if anyone has feedback or wants a free beta trial, I'd love to chat.
By @daamien - about 1 hour
Hi!

I'm the main developer of this extension. Happy to answer any questions you have about this project and anonymization in general!

By @nickzelei - about 11 hours
This can work pretty well if you want to either mask the data in prod or update it in place.

A good use case that comes to mind is using prod data in a retool app or something for your internal team but you want to mask out certain bits.

I’ve been building Neosync [1] to handle more advanced use cases where you want to anonymize data for lower-level environments. This is more useful for stage or dev data; then prod stays completely unexposed to anyone.

It also has a transactional anonymization API.

[1]: https://github.com/nucleuscloud/neosync

By @sam0x17 - about 10 hours
I was actually tasked with building essentially this same thing back in 2014 when I was a junior dev at a fintech startup. They needed an anonymized version of the prod database suitable for the support team to pull up when trying to reproduce bugs. I built this gigantic thing that would stream the db dump into a C++ app and anonymize it on the fly. It took a similar approach to the masking they do here. Fun project. The company should have productized it.
By @dandiep - about 17 hours
This is a fantastic idea. Now how to get it on RDS…
By @pgryko - about 13 hours
Are these tools able to automatically identify PII information or do you have to specify columns and data types manually? What happens if you have PII data in a string field? Do you just rely on something like spacy to identify the PII data?
By @Cynddl - about 9 hours
I'm going to repeat myself, as I do every time I encounter such tools. These tools DO NOT provide anonymization, and especially not at the level required by the EU's GDPR (where the notion of PII does not exist).

As a computer scientist and academic researcher having worked on this topic for more than a decade now (some of my work, if you are interested: [1, 2]), re-identification is often possible from a few pieces of information. Masking or replacing a few values or columns will often not provide sufficient guarantees—especially when a lot of information is being released.

What this tool does is called ‘pseudonymization’ and maybe, if done very carefully, ‘de-identification’ in some cases. With colleagues, I reviewed all the literature and industry practices a few months ago [3], and our conclusion was:

> We find that, although no perfect solution exists, applying modern techniques while auditing their guarantees against attacks is the best approach to safely use and share data today.

This is clearly not what this tool is doing.

[1] https://www.nature.com/articles/s41467-019-10933-3
[2] https://www.nature.com/articles/s41467-024-55296-6
[3] https://www.science.org/doi/10.1126/sciadv.adn7053

By @riffraff - about 16 hours
This seems great. I wonder, though: how do you ensure new columns are masked by default? It seems a safer alternative would be to start with all columns statically masked and only unveil them selectively.

I guess you can add some CI steps when modifying the db to ensure a given column is allowed or masked, but still, it would be nice if the default went the other way around.
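One way such a CI check could be sketched, assuming rules are declared through the anon label provider and therefore visible in the pg_seclabels catalog view (the schema scope is illustrative, and the objname format for column labels may vary by PostgreSQL version, so verify the join against your catalog):

```sql
-- List columns in the public schema with no masking rule declared.
-- A CI step could fail the build if this returns rows for tables
-- known to contain PII.
SELECT c.table_name, c.column_name
FROM information_schema.columns AS c
WHERE c.table_schema = 'public'
  AND NOT EXISTS (
    SELECT 1
    FROM pg_seclabels AS s
    WHERE s.provider = 'anon'
      AND s.objtype  = 'column'
      AND s.objname  = c.table_name || '.' || c.column_name
  );
```

Running this in a migration pipeline would at least turn "forgot to mask the new column" from a silent leak into a failed build, even though the default still isn't mask-by-default.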

By @sgt - about 18 hours
Just be careful that you don't anonymize your production data.