I’m trying to ingest an object from an Oracle source via Datastream, but one of the columns is not being pulled across. The schema information says the source data type is CLOB, and the Datastream documentation says this is supported (mapped to STRING) [1].
However, when I use BigQuery as the destination, the column is ignored and isn’t even created. When I try to use GCS instead (JSON with an Avro schema), the data for the column comes through as nulls and the schema information says:
{"name":"<COL_NAME>","type":{"type":"null","logicalType":"unsupported"},"default":null}
I’m expecting this data to be present, and the source schema in the Datastream UI even says ‘NULLABLE: No’.
Any insight into why this might be happening, or how to rectify it, would be appreciated. Could it be that the field is too large at source for either BigQuery or GCS to handle?
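For what it’s worth, this is roughly how I’d check the largest value in the column at source to test that theory (a minimal sketch; <TABLE_NAME> and <COL_NAME> are placeholders for the real identifiers):

-- Largest CLOB in the source column; DBMS_LOB.GETLENGTH returns the length in characters for a CLOB
SELECT MAX(DBMS_LOB.GETLENGTH(<COL_NAME>)) AS max_clob_chars
FROM <TABLE_NAME>;

If that number is the issue, I’d expect the documentation to call out a size limit, but I haven’t found one so far.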
[1] https://cloud.google.com/datastream/docs/destination-bigquery#map-data-types