
[Python] It would be nice if schemas could support optional columns #43626

Open
mmcdermott opened this issue Aug 9, 2024 · 3 comments

@mmcdermott

Describe the enhancement requested

I checked existing issues and did not see this discussed previously; apologies if I missed something. I am involved with the MEDS project, which uses Parquet files via Apache PyArrow as its storage format. We have a number of schemas in which only a subset of columns are mandatory; other columns may be optional (e.g., if a column by that name is present, it must have a certain type), and some schemas accept additional columns of arbitrary types. It would be nice if there were a way to express this directly in the notion of a PyArrow schema. For example, instead of something like this

label_schema = pa.schema(
    [
        ("patient_id", pa.int64()),
        ("prediction_time", pa.timestamp("us")), 
        ("boolean_value", pa.bool_()), # Optional
        ("integer_value", pa.int64()), # Optional
        ("float_value", pa.float64()), # Optional
        ("categorical_value", pa.string()), # Optional
    ]
)

Where we need to dynamically filter the columns to those present on the fly, we could have something like this:

label_schema = pa.schema(
    [
        ("patient_id", pa.int64()),
        ("prediction_time", pa.timestamp("us")), 
        ("boolean_value", pa.optional(pa.bool_())),
        ("integer_value", pa.optional(pa.int64())),
        ("float_value", pa.optional(pa.float64())),
        ("categorical_value", pa.optional(pa.string())),
    ]
)

Then, when using something like df.to_arrow().cast(label_schema), the system would naturally error if mandatory columns are missing, cast any optional columns that are present to their required types, and not error if an optional column is absent.

Component(s)

Parquet, Python

@jorisvandenbossche
Member

The Schema object is very much tied to actual data (i.e. a RecordBatch or Table having a schema), and in that context pyarrow doesn't really support such notion of optional column (in an actual table, the columns are either present or are not).

So I think adding that concept to just a Schema is unlikely to be something we would want to do. But we can maybe look at ways to make your use case easier, as it is definitely a valid and logical thing to do.

One idea could be to add an option to Table.cast(schema) to ignore columns in the target schema that are not present in the calling object (i.e. essentially take the calling table's set of columns as the ground truth for the columns of the result, and only use the passed schema to lookup the type for each column).

Another idea could be to make it easier to select a subset of columns in the schema. For example, assume that a Schema had a select method to select a subset of the fields of the schema, one could do something like (in two steps):

arrow_table = df.to_arrow()
arrow_table = arrow_table.cast(label_schema.select(arrow_table.schema.names))

@mmcdermott
Author

mmcdermott commented Aug 13, 2024 via email

@mmcdermott
Author

In case it is helpful to anyone dealing with this issue, I built something that might be useful: https://github.com/mmcdermott/flexible_schema/tree/main
