Conversation

@davidhcoe
Contributor

After the update to adjust the Timestamp precision, the JSON path (which is hit when using SHOW commands) was still producing nanosecond values even though the schema specified the precision as microseconds, causing downstream C# callers to get the error:

System.ArgumentOutOfRangeException: Ticks must be between DateTime.MinValue.Ticks and DateTime.MaxValue.Ticks.

@github-actions bot added this to the ADBC Libraries 19 milestone on Jun 13, 2025
@davidhcoe changed the title from "fix(go/adbc/driver/snowflake): The Timestamp values in the JSON path" to "fix(go/adbc/driver/snowflake): Adjust the precision of the Timestamp values in the JSON-only path" on Jun 13, 2025
Contributor

@CurtHagenlocher left a comment

I think this should have a test in Go. That probably requires creating a table that has a primary key? There should be existing tests for returning a schema whose setup can be reused for that, I think.

@davidhcoe
Contributor Author

> I think this should have a test in Go. That probably requires creating a table that has a primary key? There should be existing tests for returning a schema whose setup can be reused for that, I think.

I started in Go and deleted it because it was specific to a single instance. I don’t think it would be too hard to do in Go what exists in C#.

@davidhcoe
Contributor Author

> I think this should have a test in Go. That probably requires creating a table that has a primary key? There should be existing tests for returning a schema whose setup can be reused for that, I think.

> I started in Go and deleted it because it was specific to a single instance. I don't think it would be too hard to do in Go what exists in C#.

Actually, the problem doesn't occur in Go. The problem was happening because the schema was reported by the Go driver as microseconds and the value was in nanoseconds, so when the IArrowArray.ValueAt method was called, it was failing on the internal ValueConverter at:

case ArrowTypeId.Timestamp:
    return (array, index) => ((TimestampArray)array).GetTimestamp(index);

which goes to the internal TimestampArray here:

switch (type.Unit)
{
    case TimeUnit.Nanosecond:
        ticks = value / 100;
        break;
    case TimeUnit.Microsecond:
        ticks = value * 10; // <------ incorrect: unit said microsecond but value was nanosecond
        break;

All of that to say, I don't know what a separate Go test will really do for that scenario.

@CurtHagenlocher
Copy link
Contributor

I'm not sure how it can be said that the problem doesn't exist in Go; if the fix is in the driver, then the problem is in the driver. If the schema reported by the Arrow array stream is different from the schema reported by the record batch, then that's the bug, and it can be tested for in Go.

Contributor

@CurtHagenlocher left a comment

Looks good to me!

@davidhcoe
Contributor Author

davidhcoe commented Jun 15, 2025 via email

@lidavidm
Member

If there's only one row in the result, can we assert that fact? The loop is rather misleading in this case.

suite.True(rdr.Next())
rec := rdr.Record()

suite.True(rec.NumRows() == 1)
Member

suite.Equal?

seqIdx := getColIdx("key_sequence")
createdIdx := getColIdx("created_on")

if dbIdx == -1 || schemaIdx == -1 || tableIdx == -1 || colIdx == -1 || seqIdx == -1 || createdIdx == -1 {
Member

suite.Equal instead of panic?

Or just panic inside of getColIndex with the column name.

}

query = fmt.Sprintf("DROP TABLE %s.%s.%s", suite.Quirks.catalogName, suite.Quirks.schemaName, tempTable)
stmt, _ = cnxn.NewStatement()
Member

This isn't being cleaned up, though I suppose it doesn't really matter

stmt, _ = cnxn.NewStatement()
suite.Require().NoError(stmt.SetSqlQuery(query))
_, err = stmt.ExecuteUpdate(suite.ctx)
defer rdr.Release()
Member

Isn't this a double release? And why is it all the way down here?

@lidavidm lidavidm merged commit c3b915a into apache:main Jun 16, 2025
45 checks passed
@davidhcoe davidhcoe deleted the dev/snowflake-timestamp-precision-json branch October 24, 2025 10:15