
RDBMS CODE 6706: THE STRING CONTAINS AN UNTRANSLATABLE CHARACTER – SOLUTION


“RDBMS code 6706: The string contains an untranslatable character.”

Did you get this type of error in DataStage while loading data into Teradata or another database? If so, the following tutorial can help you overcome it.

This error occurs when the data contains non-printable characters. If you can see Control-M (^M) characters in the file, you can delete them directly (a one-liner for that follows below). But what if you can't see them?
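For the visible ^M case, a minimal sketch, assuming the input file is input.txt:

tr -d '\r' < input.txt > output.txt   # strip carriage returns (the ^M characters); dos2unix does the same job where it is installed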

Sometimes you can't see non-printable characters even in the vi editor on Unix.

I had the same experience with one file, and I explain it below with an example.

See the file content below. When I open it with "vi file1.txt", there is nothing between the words LINES and GENERATION except a '-'.

[Image: Unix file with non-printable characters]

When I investigated the file for the above error, I found that something was wrong with this record only. I was able to load every record into the database except this one.

What I did here: I used the 'cat' command to check for non-printable characters in the file. The command below came in handy:

cat -A file1.txt

The '-A' option is equivalent to -vET.
The exact meaning of these cat options, from the documentation:

-v, --show-nonprinting
use ^ and M- notation, except for LFD and TAB

-E, --show-ends
display $ at end of each line

-T, --show-tabs
display TAB characters as ^I
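For example, running it on the problem file might produce output like this (hypothetical; the exact notation depends on the bytes actually present — stray bytes show up in M- notation and $ marks each line end):

cat -A file1.txt
# output (hypothetical):
LINESM-^_GENERATION$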

[Image: cat -A output revealing non-printable characters in the file]
Now that we've found non-printable characters in the file, the next step is to delete them.

I used the 'tr' command to delete these non-printable characters.

tr -cd '\11\12\15\40-\176' < file1.txt > file2.txt

This deletes (-d) anything not in the specified set (-c). The set contains \11, \12, and \15, which are tab, LF, and CR, and then everything from \40 to \176, which are all the printable ASCII characters.

When the -c and -d options of the tr command are used in combination like this, the only characters tr writes to the standard output stream are the characters we’ve specified on the command line.

Although it may not look very attractive, using octal characters in the tr command makes the job easier and more efficient. The command tells tr to retain only the octal characters 11, 12, 15, and 40 through 176 when writing to standard output. Octal 11 is the [TAB] character, octal 12 is the [LINEFEED] character, and octal 15 is the [CARRIAGE RETURN] character. Octal 40 through 176 are the standard visible keyboard characters, from the [Space] character (octal 40) through the ~ character (octal 176).

These are the only characters retained by tr and the rest are filtered out, leaving us with a clean ASCII file.
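To recap, a minimal end-to-end sketch using the file names from the example above:

tr -cd '\11\12\15\40-\176' < file1.txt > file2.txt
cat -A file2.txt                 # confirm the stray bytes are gone
wc -c file1.txt file2.txt        # byte counts differ by the number of deleted characters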
[Image: file content after deleting the non-printable characters]

After applying the above 'tr' command, the '-' disappeared from the record, which means it wasn't a real hyphen (-); it was formed by non-printable characters.
Please share your experience in the comments if you have faced a similar problem or solved yours with the above solution.

Comments


Hello,

While I am inserting data into the table, it returns the error:
"6706: The string contains an untranslatable character"

My table is a MULTISET and the table definition is:
DATE_LOADED DATE FORMAT 'YY-MMMM-DD' COMPRESS ,
PERIOD_TYPE CHAR (1) ,
PIPELINE_ID BIGINT NOT NULL ,
EMPLOYEE_NO VARCHAR(50) ,
EMPLOYEE_FIRST_NAME CHAR(100) ,
EMPLOYEE_LAST_NAME CHAR(100) ,
EMPLOYEE_FULL_NAME CHAR (200) ,
TEAM_ID INTEGER ,
TEAM_NAME CHAR(250) ,
SALES_AREA_ID INTEGER ,
SALES_AREA_NAME VARCHAR(250) ,
IP_NUMBER BIGINT ,
CUSTOMER VARCHAR(100) ,
INDUSTRY_ID INTEGER ,
INDUSTRY VARCHAR(250) ,
SUBINDUSTRY VARCHAR(100) ,
GAIN_CATEGORY_ID INTEGER NOT NULL ,
GAIN_CATEGORY VARCHAR(50) ,
MAINPRODUCTLINEID INTEGER ,
MAINPRODUCTLINE VARCHAR(250) ,
VAPM_THIS_YEAR DECIMAL(18,6) ,
ANNUALISED_VAPM DECIMAL(18,6) ,
STATUS_ID INTEGER NOT NULL ,
STATUS_DESC VARCHAR(50) ,
LIKELIHOOD SMALLINT ,
WONHELP VARCHAR(250) ,
TURNOVER_VALUE VARCHAR(50) ,
BANK_ID INTEGER ,
BANK VARCHAR(250) ,
BANK_ORDER CHAR(10) ,
REASON_ID INTEGER ,
REASON VARCHAR(250) ,
LEAD_LOSS_SOURCE_ID INTEGER ,
LEAD_LOSS_SOURCE VARCHAR(250) ,
OPENED_DATE TIMESTAMP(6) ,
DATE_ENTERED TIMESTAMP(6) ,
DATE_ANTICIPATED TIMESTAMP(6) ,
MONTH_OPENED INTEGER ,
MONTH_ENTERED INTEGER ,
MONTH_ANTICIPATED INTEGER ,
YEAR_OPENED INTEGER ,
YEAR_ENTERED INTEGER ,
YEAR_ANTICIPATED INTEGER ,
BBG_ID INTEGER ,
PRIMREFEMPLOYEEID INTEGER ,
PRIMREFCOUNTRYID INTEGER ,
SECREFEMPLOYEEID INTEGER ,
SECREFCOUNTRYID SMALLINT ,
PRIMREFFIRSTNAME VARCHAR(50) ,
PRIMREFLASTNAME VARCHAR(50) ,
SECREFLASTNAME VARCHAR(50) ,
SECREFFIRSTNAME VARCHAR(50) ,
INITIALRWA DECIMAL(18,2) ,
RETURNRWA DECIMAL(6,2) ,
ZONE_NAME VARCHAR(250) ,
COUNTRY VARCHAR(250)

And the Insert statement is:
CAST( T20.REP_DATE AS DATE )
AS DATE_LOADED
, 'D' AS PERIODTYPE
, T1.DEALID AS PIPELINE_ID
, T2.STAFFNUMBER AS EMPLOYEE_NO
, TRIM ( T2.FIRSTNAME ) AS EMPLOYEE_FIRST_NAME
, TRIM ( T2.LASTNAME ) AS EMPLOYEE_LAST_NAME
, TRIM ( T2.FIRSTNAME ) || ', ' || TRIM ( T2.LASTNAME )
AS EMPLOYEE_FULL_NAME
, T1.TEAMID AS TEAM_ID
, T9.TEAM AS TEAM_NAME
, T1.REGIONID AS SALES_AREA_ID
, TRIM( T3.REGION ) AS SALES_AREA_NAME
, T1.CUSTOMERSYSTEMNUMBER AS IP_NUMBER
, T1.CUSTOMERNAME AS CUSTOMER
, T1.INDUSTRYID AS INDUSTRY_ID
, T6.INDUSTRYNAME AS INDUSTRY
, T1.SUBINDUSTRY AS SUBINDUSTRY
, T14.CUSTOMERTYPEID AS GAIN_CATEGORY_ID
, T14.CUSTOMERTYPE AS GAIN_CATEGORY
, T1.PRODUCTID AS MAINPRODUCTLINEID
, T15.PRODUCTNAME AS MAINPRODUCTLINE
, ( T13.TOTALINCOMETHISYEAR / T19.EXCHANGE_RATE )
AS VAPM_THIS_YEAR
, ( T13.ANNUALISEDINCOME / T19.EXCHANGE_RATE )
AS ANNUALISED_VAPM
, T5.DEALSTATUSID AS STATUS_ID
, T5.DEALSTATUS AS STATUS_DESC
, SUBSTR(T4.DEALSTAGE, POSITION ('(' IN T4.DEALSTAGE )+1,
POSITION ('%' IN T4.DEALSTAGE ) -
POSITION ('(' IN T4.DEALSTAGE ) - 1 )
AS LIKELIHOOD
, T7.WONHELP AS WONHELP
, T8.TURNOVERLEVEL AS TURNOVER_VALUE
, T1.BANKID AS BANK_ID
, T10.BANKNAME AS BANK
, NULL AS BANK_ORDER
, T1.REASONID AS REASON_ID
, T11.REASON AS REASON
, T1.LEADLOSSSOURCEID AS LEAD_LOSS_SOURCE_ID
, T12.LEADLOSSSOURCE AS LEAD_LOSS_SOURCE
, CAST ( T1.OPENDATE AS DATE )
AS OPENED_DATE
, CAST ( T1.DATEFIRSTENTERED AS DATE )
AS DATE_ENTERED
, CAST ( T1.ANTICIPATEDCOMPLETIONDATE AS DATE )
AS DATE_ANTICIPATED
, EXTRACT ( MONTH FROM T1.OPENDATE )
AS MONTH_OPENED
, EXTRACT ( MONTH FROM T1.DATEFIRSTENTERED )
AS MONTH_ENTERED
, EXTRACT ( MONTH FROM T1.ANTICIPATEDCOMPLETIONDATE )
AS MONTH_ANTICIPATED
, EXTRACT ( YEAR FROM T1.OPENDATE )
AS YEAR_OPENED
, EXTRACT ( YEAR FROM T1.DATEFIRSTENTERED )
AS YEAR_ENTERED
, EXTRACT ( YEAR FROM T1.ANTICIPATEDCOMPLETIONDATE )
AS YEAR_ANTICIPATED
, T1.DGID AS BBG_ID
, T1.PRIMREFEMPLOYEEID AS PRIMREFEMPLOYEEID
, T1.PRIMREFCOUNTRYID AS PRIMREFCOUNTRYID
, T1.SECREFEMPLOYEEID AS SECREFEMPLOYEEID
, T1.SECREFCOUNTRYID AS SECREFCOUNTRYID
, T1.PRIMREFFIRSTNAME AS PRIMREFFIRSTNAME
, T1.PRIMREFLASTNAME AS PRIMREFLASTNAME
, T1.SECREFLASTNAME AS SECREFLASTNAME
, T1.SECREFFIRSTNAME AS SECREFFIRSTNAME
, T1.INITIALRWA AS INITIALRWA
, T1.RETURNRWA AS RETURNRWA
, T16.ZONENAME AS ZONE_NAME
, T17.COUNTRY AS COUNTRY

Can someone please guide?

Thanks,
Ali


Untranslatable Character Data Handling in DataStage

"Failed. 6706: The string contains an untranslatable character."

While loading data into Teradata tables using DataStage jobs, we get errors like "SELECT/Loading Failed. 6706: The string contains an untranslatable character."

This error usually occurs when a junk/bad/Latin character is encountered while selecting or loading data from/to a table column using a function like CAST(), COALESCE(), or TRIM(). Sometimes the data contains Latin/bad/junk characters that DataStage is unable to load into the database.

Logic 1:

This applies if you are using Teradata Connector stages to select/load data from/to a Teradata database.

Change the following properties in your DataStage job to load the untranslatable character data.

Job property:

"NLS" => "Default map for stages" = UTF8/UTF16

Properties in the Teradata Connector stage:

"Transaction Mode": TERADATA
"Client Character Set": UTF16 or UTF8
"Automap Characterset coding": Yes
"ArraySize": 1
"RecordCount": 1
"Variant": 12, 13, 14, or 15 (this property defaults to 8.1 or so; change it to match your current Teradata version)

Logic 2:

Your parallel "NLS" map should be set to ISO-8859-1, which will map cleanly to LATIN1_01A. And none of your character columns should have the extended property set to Unicode.

Note: If your source data comes in as "Unicode" and you are reading it with the extended property set to "Unicode", you need to perform a "UstringToString" conversion before loading the data into Teradata. That should clear up your bad character UNLESS the bad character exists in the source data.
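As a sketch, the conversion in a Transformer stage derivation might look like the line below (the link and column names are placeholders; the optional second argument names the character-set map and otherwise defaults to the project map):

Out.Description = UstringToString(In.Description, "ISO-8859-1")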

Points to keep in mind for handling untranslatable characters in DataStage jobs:

1. Make sure that you read and write the data using connector stages with the same "Client Character Set", i.e.:

If you read the data with "Client Character Set" = ASCII, always use "Client Character Set" = ASCII when loading it.

If you read the data with "Client Character Set" = UTF8, always use "Client Character Set" = UTF8 when loading it.

If you read the data with "Client Character Set" = UTF16, always use "Client Character Set" = UTF16 when loading it.

2. Always keep the job property "NLS" => "Default map for stages" = UTF8/UTF16 when loading untranslatable character data into a table.

3. You can use the proper "source TO target" value for the translation, e.g. LATIN_TO_UNICODE. Check "SHOW TABLE" to verify whether a character set is specified for the column in the table definition, and choose the character-set translation string accordingly, e.g. LATIN_TO_UNICODE, UNICODE_TO_LATIN, etc. A SQL sketch follows.
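For illustration, a minimal Teradata SQL sketch of that translation (database, table, and column names are placeholders). TRANSLATE_CHK returns the position of the first untranslatable character (0 when the value converts cleanly), which helps locate the offending rows; WITH ERROR makes TRANSLATE substitute the error character instead of raising error 6706:

-- Find rows whose Unicode values cannot be converted to LATIN
SELECT pk_col, src_col
FROM my_db.my_table
WHERE TRANSLATE_CHK(src_col USING UNICODE_TO_LATIN) <> 0;

-- Convert anyway, replacing untranslatable characters with the error character
SELECT TRANSLATE(src_col USING UNICODE_TO_LATIN WITH ERROR)
FROM my_db.my_table;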


shravya346

Participant
Posts: 9
Joined: Wed Dec 07, 2011 3:01 pm
Location: SEATTLE

Teradata 6706 error

When I am trying to load a mainframe file to Teradata, I am getting the following error:

Teradata_Connector_6,0: RDBMS code 6706: Field CARD_TRACK1 (CC_TeraAdapter::selectLoadErrorTable1, file CC_TeraAdapter.cpp, line 7,430)

I am able to load the same file into an Oracle table, but not into TD.

The error indicates an untranslatable character in the data.

What do I need to do for the data to be loaded into the TD table? I have tried changing the field to the UNICODE character set in the TD table definition, and even then it's failing.

ray.wurlod

Participant
Posts: 54604
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post

by ray.wurlod » Wed Dec 07, 2011 3:12 pm

Welcome aboard.

What method (that is, what stage type and what settings) are you using to write to Teradata?

IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.

shravya346

Participant
Posts: 9
Joined: Wed Dec 07, 2011 3:01 pm
Location: SEATTLE

Post

by shravya346 » Wed Dec 07, 2011 3:18 pm

I am using

seq file >>copy>>TD connector

TD Connector :
Access Method : BULK

shravya346

Participant
Posts: 9
Joined: Wed Dec 07, 2011 3:01 pm
Location: SEATTLE

Post

by shravya346 » Thu Dec 08, 2011 6:14 pm

How do I trim these special characters in DataStage? Is there any way of doing this in DS? If not, how can I handle it while extracting from the Oracle database?

Aruna Gutti

Premium Member
Premium Member
Posts: 145
Joined: Fri Sep 21, 2007 9:35 am
Location: Boston

Post

by Aruna Gutti » Fri Dec 16, 2011 11:28 am

Hi Sravya,

What version of Teradata are you using?

We set up our Teradata Connector stage with the following options, and the special characters you showed in your post get loaded through DataStage.

Variant:12
Transaction Mode: ANSI
Client Character Set: LATIN1_0A
Automap Characterset coding: Yes

We configured our database to accept LATIN1_0A character set.

Hope this helps.

Aruna.

kwwilliams

Participant
Posts: 437
Joined: Fri Oct 21, 2005 10:00 pm

Post

by kwwilliams » Fri Dec 16, 2011 1:31 pm

Your Parallel NLS Map should be set to ISO-8859-1, which will map cleanly to LATIN1_01A. And all of your character columns should NOT have extended properties set to Unicode.

If your source is Unicode, you need to perform a Ustring to String conversion before landing to Teradata. That should clear up your bad character UNLESS the bad character exists in the source data.

Problem

Data sent from SAP through MQ has a description 'Black & White Dècor Collection'. While loading the data into a Teradata table (defined as Unicode) via an MQ connector, the DataStage job is failing with the error: 'Td_StgItemGroup,0: [IIS-CONN-TERA-005004] RDBMS code 6706: The string contains an untranslatable character'.

Cause

Incoming data has to match the outgoing data (i.e., if the incoming data is UTF-8, then the outgoing data has to be UTF-8 as well).

Resolving The Problem

In the WebSphere MQ Connector stage, the CCSID (Coded Character Set Identifier) has to be set to UTF-8 as well (i.e., 1208, the CCSID for UTF-8).


Hello @tjoshua52,

There are many reasons that may lead to error 6706. For example, your source data may contain double-byte characters (in this case you need to change the field to CHARACTER SET UNICODE), or there may be some rare characters that Teradata does not accept by default (in this case you need to modify the TD setting "AcceptReplacementCharacters = TRUE"). Either way, you need to know the exact reason for the error. Simply substituting all such characters may lead to other potential issues, e.g. PK violations (if such characters were part of the PK) or partial data loss (if they were in fact meaningful characters).

You can check the error tables (E0/E1/E2) and the log table (L0) in the Teradata database to help determine which characters cause the error, as in the sketch below.
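As a minimal sketch (the database and table names are placeholders; the actual error-table names depend on how your load job declares them):

-- Peek at a few rows of an error table to see which values were rejected
SELECT *
FROM my_db.my_table_E1
SAMPLE 10;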

If you know which characters cause the error, you can use "Substituting or Deleting Source Characters Unsupported by the Selected Character Set"; detailed usage can be found in the user guide.

If you still have no clue how to solve the problem, I'd suggest you create a support ticket in Salesforce and get the support team's help to fix the issue.

Regards,

John.
