Create a table with a JSONB column:
CREATE TABLE Doc (
    id UUID PRIMARY KEY,
    val JSONB NOT NULL
);
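For concreteness, inserting a document looks like this (the document shape here is just an illustration; gen_random_uuid() is built into Postgres 13+, earlier versions need the pgcrypto extension):

INSERT INTO Doc (id, val)
VALUES (gen_random_uuid(), '{"user_id": "u-123", "title": "hello"}');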
Then later it turns out every document has a user_id, so you add a check constraint and an index:

ALTER TABLE Doc ADD CONSTRAINT check_doc_val CHECK (
    jsonb_typeof(val) = 'object' AND
    val ? 'user_id' AND
    jsonb_typeof(val->'user_id') = 'string'
);
CREATE INDEX doc_user_id ON Doc ((val->>'user_id'));
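One subtlety with expression indexes: the planner only considers them when a query repeats the indexed expression essentially verbatim, so lookups need to be written against val->>'user_id', not val->'user_id' or some other spelling:

-- Can use the doc_user_id index:
SELECT id FROM Doc WHERE val->>'user_id' = 'u-123';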
I think the Postgres syntax for this is pretty ugly. And if you also want a foreign key constraint, you still have to pull that part of the JSON out into a real column (or duplicate it as a column on Doc). I'm not sure it's even worth having Postgres check these constraints, versus just checking them in application code.

I'm also a little worried about performance (maybe prematurely). JSONB updates are not in-place: if a document is large, you rewrite the entire blob each time you modify anything in it, while a properly normalized schema only rewrites the rows that actually changed.
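One middle ground for the foreign-key problem, sketched below under the assumption of a users table with a text primary key (both names are illustrative): Postgres 12+ stored generated columns can mirror a JSON field as a real column without the application having to keep two copies in sync, and that column can carry a plain index and a foreign key. (Referential actions that would write to the generated column, like ON DELETE SET NULL, are not allowed.)

ALTER TABLE Doc
    ADD COLUMN user_id TEXT GENERATED ALWAYS AS (val->>'user_id') STORED;
ALTER TABLE Doc
    ADD CONSTRAINT doc_user_fk FOREIGN KEY (user_id) REFERENCES users (id);
CREATE INDEX doc_user_id_col ON Doc (user_id);

This doesn't help with the rewrite cost, but it does make the check constraint above largely redundant: a NOT NULL on the generated column would reject documents without a string user_id.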