Integer constant missing for SQL "Image" type?



Nils G. Svedmyr
13-Mar-2017, 04:36 PM
Hi,

1) I'm making functions for mapping between DataFlex standard data types and various SQL types back and forth.

In the Alpha2 Table Explorer I noticed the "image" SQL type, which maps to the DataFlex "Binary" type for an MSSQL db. However, I can't seem to find a corresponding integer constant - e.g. "SQL_IMAGE" - defined in cli.pkg or mssqldrv.pkg. Should there be one?

2) On a related note, I'm using the "DF_DRIVER_MAP_DFxxxxTO_SQLTYPE" attributes to check the back-end for the correct type to match with a DataFlex type. I don't understand why there are mapping attributes for DF_ASCII, DF_BINARY, DF_DATE and DF_TEXT but *not* for DF_BCD, which on MSSQL can be "smallint", "int" or "numeric". Why is that?

Stephen W. Meeley
13-Mar-2017, 05:21 PM
Nils,

On the mapping, look at the CLI 6.2 Driver Build Notes in the What's New and scroll down to 6.2.0.22. There is a table of the default mappings, and note that they have changed. The older driver used the SQL 2000 mappings for many years, and 6.2 now uses the SQL 2008 mappings (which are actually good up through SQL 2016).

Martin Moleman
14-Mar-2017, 04:00 AM
1) From the help: DF_DATABASE_MAP_DFBINARY_TO_SQLTYPE


Microsoft SQL Server: SQL_VARBINARY (VarBinary(max)), SQL_LONGVARBINARY (image), SQL_BINARY (binary)

So for image you should use SQL_LONGVARBINARY.

Image and binary are old types that are no longer recommended. Use varbinary(max) instead.
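If it helps with your mapping functions, a rough sketch for the binary family could look like the code below. This is only an illustration - the object and function names are made up for the example, and it assumes the SQL_xxx type constants that cli.pkg defines:

Use cli.pkg  // SQL_xxx type constants and the ESQL interface

Object oBinaryTypeMap is a cObject
    // Illustration only: MS SQL type name for a binary-family SQL type constant
    Function MsSqlBinaryTypeName Integer iSqlType Returns String
        String sName
        Case Begin
            Case (iSqlType = SQL_BINARY)
                Move "binary" to sName
                Case Break
            Case (iSqlType = SQL_VARBINARY)
                Move "varbinary(max)" to sName
                Case Break
            Case (iSqlType = SQL_LONGVARBINARY)
                Move "image" to sName
                Case Break
            Case Else
                Move "" to sName
        Case End
        Function_Return sName
    End_Function
End_Object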

2) DF_BCD must take the length into account to determine the SQL type. It makes no sense to map NUM 8.0 to smallint, since smallint can only hold values up to 32,767. The connectivity kit automatically maps to the best fitting type based on length.
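To illustrate the idea (again only an illustration - the cut-offs below are not necessarily the exact ones the connectivity kit uses, and the names are made up):

Object oBcdTypeMap is a cObject
    // Illustration only: pick a SQL type constant for a DF_BCD column.
    // iDigits = digits before the decimal point, iDecimals = digits after it.
    Function SqlTypeForBcd Integer iDigits Integer iDecimals Returns Integer
        Integer iSqlType
        If (iDecimals > 0) Move SQL_NUMERIC to iSqlType        // fractions need numeric
        Else If (iDigits <= 4) Move SQL_SMALLINT to iSqlType   // smallint holds up to 32,767
        Else If (iDigits <= 9) Move SQL_INTEGER to iSqlType    // int holds up to 2,147,483,647
        Else Move SQL_NUMERIC to iSqlType                      // anything larger goes to numeric
        Function_Return iSqlType
    End_Function
End_Object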

Note these mappings are DataFlex to SQL mappings. They only play a role when creating new columns where you specify the DataFlex type first. This is the case when converting from embedded to SQL and in Database Builder.

In the Studio table editor, you choose the SQL type (smallint, integer, numeric, etc.) first and that maps to a DataFlex type (SQL to DF type mapping). DF to SQL type mappings don't play a role here.

There is a help topic 'Type Mappings' that has more information on DF to SQL and SQL to DF mappings.

Nils G. Svedmyr
14-Mar-2017, 04:28 AM
Hi Stephen,

Thanks for the pointer, but my case is slightly more complicated - which I probably should have tried to explain from the start. But as so often when you're right in the middle of something, it can be difficult to see that you haven't described your problem sufficiently :). See also my reply to Martin.

Nils G. Svedmyr
14-Mar-2017, 05:03 AM
Thanks Martin for your replies.

1) Great, I had missed that part of the help. I have studied the help chapter "SQL Server Type Mappings" closely, though.

2) "DF_BCD must take the length in account to determine the SQL type." But isn't that the same thing with DF_ASCII & DF_TEXT and let's say varchar or nvarchar? What is the difference?

I think I understand the concept of mapping DataFlex data types to and from SQL back-ends, and it might not seem logical to you that I should bother with this, but my case is slightly more complicated than average because of "The Database Update Framework".

With the Database Update Framework it is possible to add e.g. a column to the SQL back-end, and the type can be any of the allowed SQL types. This is done with ESQL - just as I imagine the Studio does it. This part works just fine, but it is up to the programmer to select the correct SQL type for the column by using one of the integer constants defined in cli.pkg or mssqldrv.pkg (we're talking about MS SQL in this case). However, an important feature of the framework is that it is what you might call "re-entrant", in the sense that it should be possible to execute the same code more than once; the framework logic should first check that it is OK to make the change - i.e. that it hasn't already been done. So let's say that a new column was added with:


Get SqlAddColumn OrdHea.File_Number "NewColumn" SQL_VarChar 300 True to bOK


If that code is run again - which should be perfectly fine - the logic first needs to check whether the column exists and whether the type & precision match. This is why I need the type mapping functions - just an explanation of why I bother to create these mapping conversion functions.
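Roughly, what that check boils down to is something like the sketch below. This is not the actual framework code, just an illustration with made-up names, and it assumes the cli.pkg ESQL interface (ghoSQLMngr, SQLFileConnect, SQLOpen, SQLExecDirect, SQLFetch, SQLColumnValue):

// Sketch only. Looks up the column in INFORMATION_SCHEMA.COLUMNS so that running
// the same update twice does not try to add the column again.
Object oColumnCheck is a cObject
    Function ColumnMatches Integer iFile String sTable String sColumn String sSqlTypeName Integer iLength Returns Boolean
        Handle hoConn hoStmt
        Integer iFetch iLen
        String sSql sType sLenValue
        Boolean bMatch

        Move False to bMatch
        Get SQLFileConnect of ghoSQLMngr iFile to hoConn            // reuse the table's own connection
        If (hoConn <> 0) Begin
            Get SQLOpen of hoConn to hoStmt
            Move ("select DATA_TYPE, CHARACTER_MAXIMUM_LENGTH from INFORMATION_SCHEMA.COLUMNS " + ;
                  "where TABLE_NAME = '" + sTable + "' and COLUMN_NAME = '" + sColumn + "'") to sSql
            Send SQLExecDirect of hoStmt sSql
            Get SQLFetch of hoStmt to iFetch
            If (iFetch <> 0) Begin                                  // the column is already there
                Get SQLColumnValue of hoStmt 1 to sType
                Get SQLColumnValue of hoStmt 2 to sLenValue
                Move sLenValue to iLen
                Move ((Lowercase(sType) = Lowercase(sSqlTypeName)) and (iLen = iLength)) to bMatch
            End
            Send SQLClose of hoStmt
            Send SQLDisconnect of hoConn
        End
        Function_Return bMatch
    End_Function
End_Object

For the SqlAddColumn example above, the check would run first with "varchar" and 300 as the expected type name and length - and translating between the SQL_VarChar constant and the "varchar" name is exactly where my mapping functions come in.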



One thing I don't understand regarding mapping is what happens when DataFlex can't match the SQL back-end because the SQL back-end's limits are simply too big. Have a look at the table below with SQL data types for Microsoft SQL Server. What exactly happens if the database value is just too big for the DataFlex type?

SQL Server Data Types

String types:



Data type | Description | Storage
char(n) | Fixed width character string. Maximum 8,000 characters | Defined width
varchar(n) | Variable width character string. Maximum 8,000 characters | 2 bytes + number of chars
varchar(max) | Variable width character string. Maximum 1,073,741,824 characters | 2 bytes + number of chars
text | Variable width character string. Maximum 2GB of text data | 4 bytes + number of chars
nchar | Fixed width Unicode string. Maximum 4,000 characters | Defined width x 2
nvarchar | Variable width Unicode string. Maximum 4,000 characters |
nvarchar(max) | Variable width Unicode string. Maximum 536,870,912 characters |
ntext | Variable width Unicode string. Maximum 2GB of text data |
bit | Allows 0, 1, or NULL |
binary(n) | Fixed width binary string. Maximum 8,000 bytes |
varbinary | Variable width binary string. Maximum 8,000 bytes |
varbinary(max) | Variable width binary string. Maximum 2GB |
image | Variable width binary string. Maximum 2GB |


P.S. I think there might be a couple of errors in the help for the "SQL Server Type Mappings". In the "Character Types" table it says, for example, that "Char(1000)" maps to "Asc(1000)", but ASCII fields/columns can't be larger than 255 characters, can they? There are a couple of entries like that you might want to have a look at.