

When using SAS/ACCESS to Teradata on DELL/XP with TTU7.5, the following
error may be encountered:

ERROR: Teradata connection: CLI2: ICULOADLIBERR(527): Error loading ICU libraries. .


I am using the BTEQ import/export option. I have followed these steps in order:

Step 1
export data from table to file

.set width 900
.export data file=myoutput.txt
Select item_id from
sample 10;

.export reset;

Step 2
creating a new table

create table db.tab2
(item_id integer)

Step 3
import data from file to table

.import data file=myoutput.txt,skip=3
.quiet on
.repeat *
using item_id (integer)
insert into db.tab2 values (:item_id);


I just installed version 13 and found that I can copy the answerset and paste into a text file, but once I hover over an Excel file the clipboard reverts to the previous contents. Has anyone come across that one?


I am using Fast export to copy a Teradata table to a flat file on a windows server.

.logtable database.logtable_tablename;
.logon terdata_box/user_id,password;
.begin export sessions 2;
.export outfile myoutputfile.txt RECORD MODE FORMAT TEXT;
select *
from database.table
where date_column ge cast ('01/01/2010' as date format 'dd/mm/yyyy')
and key_column eq 999999999999;
.end export;

The query runs fine in sql assistant producing a readable text file. When I run fexp.exe the output file has a lot of unreadable rubbish in it.

Hi all,

While MLoading a single table with multiple files, does Teradata load these files in parallel or one after the other?


Do FastLoad/FastExport have Windows 64-bit versions?

I am looking for a solution that can synchronize/replicate data between two Teradata production boxes. I know there are a couple of solutions such as Teradata Mover, NPARC, and GoldenGate.

I would like to know if Teradata Parallel Transporter can achieve data replication/synchronization. The objective is to move full/partial data between two Teradata boxes.
Are there any limitations on TPT, e.g. can we move journals, indexes (JOIN/HASH), and STATs using TPT?
And how is the performance of TPT, i.e. how do you achieve an SLA of an hour?

hi all,
I am trying to run a FastLoad inside a UNIX script. The table being inserted has six columns: three columns get their data from the flat file, while the other three are date columns that have to be derived from the date that comes along with the flat file name.

If you are going to use the Teradata utilities, will it change the architecture or SDLC?
If yes, then why and how?

I tried to use the Visual Explain tool V 13 but get the following errors:

[Teradata][ODBC Teradata Driver][Teradata Database] Internal error: Please do not resubmit the last request. SubCode, CrashCode: 0, 3701.

I submitted a complex query with joins and GROUP BY clauses using the "Insert Explain" option in the tool. I don't believe the QCD database is the problem, because the tool worked fine with a simpler query (no GROUP BY, just a simple join).

Has anyone seen this problem before?

ps: I am using the Teradata Express DB 13 Linux VM as my database server.

In Teradata SQL Assistant 12 I can save all sheets to one Excel file, but Teradata SQL Assistant 13 can't. Why?

SQL Assistant 13.0 has a menu button to suppress inclusion of a column of row numbers as headers in copy-pasting or exporting answersets. If you switch this feature off using the button it resets itself to the default of including the row headers every time a new answerset opens.

I had been doing Teradata export and import through named pipes until I used TPT. Now that I have learnt how to write a TPT script, I have been using the import-from-export-operator pattern in TPT.

One of my users is experiencing a problem while attempting to connect to our development environment via SQL Assistant. SQL Assistant opens up, and he clicks on the 'CONNECT' button. He then selects his datasource and the logon screen pops up. Once he enters his logon information and selects 'OKAY' the entire thing disappears...


I get an error when I run BTEQ with the QUERY_BAND option.
Is QUERY_BAND supported by BTEQ?

BTEQ Tue Jun 29 14:14:41 2010

.logon DWP/DBA,

*** Logon successfully completed.
*** Teradata Database Release is
*** Teradata Database Version is
*** Transaction Semantics are BTET.
*** Character Set Name is 'ASCII'.

*** Total elapsed time was 18 seconds.
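For what it's worth, QUERY_BAND is a SQL statement (SET QUERY_BAND ... FOR SESSION), not a BTEQ dot command, so BTEQ should accept it as an ordinary request on Teradata 12 and later. A minimal sketch of the script I'd try (the logon password and the name=value pairs are placeholders):

```shell
# Generate a small BTEQ script that sets a query band for the session.
# Password and band pairs are made-up examples.
cat <<'EOF' > qb.bteq
.logon DWP/DBA,password
SET QUERY_BAND = 'ApplicationName=nightly_report;Step=1;' FOR SESSION;
.logoff
EOF
cat qb.bteq
# bteq < qb.bteq    # uncomment to actually run it against the database
```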


Below is a SAS script that passes variables through the same code many times. I would like to do the same type of thing in Teradata, with my initial code looking like this for an UPDATE.

1. First, this is an UPDATE statement.
2. I need to observe the current month's and prior month's MISS_PMT_COUNT over years to determine the MOST RECENT TIME when a mortgage account went from delinquent to cured and vice versa.
3. If there is some way to determine &N observations on the MIN(STATEMENT_DATE) for the pass-through values, that would be great.

Thank you in advance,
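The closest Teradata analogue to the SAS "pass variables through the same code" pattern is a parameterized macro, executed once per value. A sketch only; the table, columns, and the date passed to EXEC are assumptions based on the post:

```shell
# Generate a parameterized Teradata macro plus one EXEC call.
# mortgage_acct, cure_flag, statement_date and the sample date are
# invented names for illustration.
cat <<'EOF' > upd_macro.sql
REPLACE MACRO upd_cure_flag (run_month DATE) AS (
  UPDATE mortgage_acct
  SET cure_flag = 'Y'
  WHERE statement_date = :run_month
    AND miss_pmt_count = 0;
);
EXEC upd_cure_flag (DATE '2010-01-31');
EOF
cat upd_macro.sql
```

Each month (or each &N-style value) becomes one EXEC call, so the UPDATE body is written only once.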


Product ID: B035-3119-088K
Date Added to Web Site: May 27, 2010

I am getting this error while running MLoad:
02:42:25 UTY2403 An invalid statement was found before the .LOGON statement.
02:42:25 UTY2410 Total processor time used = '0 Seconds'

Please help me!

I have two BTEQs with the following type of statements. Both are submitted at the same time and both are trying to insert into the same table.




What effect does MultiLoad have on performance? Does MLoad take up all resources and leave other tools like BTEQ and SQL Assistant with decreased performance?

I am using FastExport in batch mode on Windows, and I want to schedule it using a batch file.
I want to know how to include the current date in the export file name.

.export outfile "Export.out"
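As far as I know, FastExport does not expand variables in .EXPORT OUTFILE itself, so the usual trick is to have the scheduler generate the script with the date already baked into the file name. A sketch with invented file names; on plain cmd.exe you would build TODAY from %date% substrings instead of date(1):

```shell
# Build the .EXPORT line with today's date in the file name, then
# splice it into the real FastExport script before calling fexp.
TODAY=$(date +%Y%m%d)                                         # e.g. 20100629
printf '.export outfile "Export_%s.out"\n' "$TODAY" > fexp_export_line.txt
cat fexp_export_line.txt
# fexp < full_script.fx    # run FastExport on the generated script
```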

It works perfectly with the x86 ODBC driver, but fails with the x8664 ODBC driver.

Does anyone have a suggestion on this issue? I have installed tdicu, teragss, and the TD ODBC driver.

Error message is 'Specified driver could not be loaded due to system error 126 (Teradata)'

Can anyone help me with loading a pipe-delimited text file that has headers and footers?

I have been trying to load the file into one VARCHAR(300) column, to then use some code to search for each | and extract each field in turn.

I am using BTEQ import as the file is small, but I'm having trouble with loading; the error message is *** Failure 3857 Cannot use value (or macro parameter) to match 'DUMMY_FIEL

script looks like:


.if errorcode !=0 then .quit errorcode
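Since BTEQ itself has no skip-footer option that I know of, one workaround is to trim the header and footer before the .IMPORT, so each pipe-delimited field can be loaded directly. A minimal sketch with invented file names and stand-in data:

```shell
# Build a stand-in for the real pipe-delimited file, with a header
# line at the top and a footer line at the bottom.
printf 'HEADER\nfield1|field2\nfield3|field4\nFOOTER\n' > raw.txt
# Drop the first (header) and last (footer) line before loading.
sed '1d;$d' raw.txt > body.txt
cat body.txt
```

body.txt then contains only the data rows, ready for a VARTEXT-style import.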



I am dealing with a very strange situation that I hope someone can explain. We recently upgraded to a V12 Teradata machine at our client site. We are now trying to install the V12 TTU on everyone's desktop.

The background here is that admin rights to desktops have been removed, and we have to ask permission desk by desk to get admin rights to install the TTU utilities. It is very time-consuming.

Hi all,

I am the author of a documentation tool for all major DBMS on the market, and I recently added support for Teradata. I have tried it on the sample dbs that Teradata 13 ships with, as well as the system catalog itself (DBC). What I'd really like though are some beta testers to run it against some real world databases.

The tool is called SqlSpec. You give it a connection string to your database, and it generates comprehensive documentation for your db, with PK/FK graphs, dependency graphs, and lots more.

I have binary-format data files. Some columns are VARCHAR and contain only spaces. The spaces become NULL after MLoad and TPump, while FastLoad and tbuild produce correct output with the same data file. It looks like the NULLIF clause in .FIELD treated spaces as NULL. Any suggestions on this issue? Thanks in advance.

Here is the script I used for the mload.

.LogTable "TDTEST"."TDTGT1_LT";

.LOGON xx.xx.xx.xx/TDTEST,xxxx;

ErrorTable "TDTEST"."ET_TDTGT1"

.LAYOUT file_layout;

The column headings/titles revert to "A", "B", "C" whenever I begin a query with a comment (for documentation purposes) or perform a join in a query. However, if I remove the leading comment and/or join, the proper titles appear in my answerset.

Hi Guys

Sorry for this question, but I'm a newbie in Teradata and did not find the answer in the FAQ / formal articles.

The issue is about the ability to run "select ....order by" on a table that is being loaded in parallel by MLoad.

Hi All.

Long time lurker, First time poster here.

I searched before posting this question but couldn't find an answer.

I have a one-step test job using BTEQ, using a VERY long SQL query (several concatenated cards). The job ran for approximately 3.5 hours, until it abended on a failure 3807.

"Failure 3807 Object ''VT_PROCESS_DATES'' does not exist."

Volatile table VT_PROCESS_DATES was created successfully at the start of the job and was used successfully (insert and update) several times while the job was running.

I tried out the macro possibilities inside SQL Assistant. Unfortunately I saved it to the letter "A", so now the macro runs every time I hit this character... Does anyone know how to delete a recorded macro inside SQL Assistant?

Windows 7 64-bit
Teradata v12
SQLA Java 64-bit

I am trying to export a view that contains about 3500 rows, but I am having a hard time exporting the entire result set beyond what is being displayed.

Currently SQLA only displays 500 rows, which is OK. When I select export all to excel, I was expecting to see all 3500 entries, but instead I end up with the 500 displayed rows only. How can I export all rows?

I have tried modifying the display result to 5000 instead of default of 500, but SQLA still displays 500 only. Is this a bug in the tool or am I doing something wrong?

I have a problem using TPT. When I use the Oracle or Informix ODBC operator in my Teradata PT job, I receive an ODBC connect error or ODBC driver error.

The following is my machine description:
1: Redhat linux 2.6.9
2: Software:
Informix CSDK 2.9
Oracle database 10g

perl 5.8.8
DBI 1.608

The following are my test steps:
1: When I set $ODBCHOME=/opt/teradata/client/13.0/odbc_32 and $ODBCINI=/opt/teradata/client/13.0/odbc_32/odbc.ini,


I am using MLoad to delete data from a 10 TB table.
The table is partitioned based on date.
The table has NUSIs on two columns.

When we dropped 20 partitions without dropping the NUSI, it took 2 days.

Hi All,

When running the TPT scripts to load a 12 GB file into the target table, the scripts fail with the below error message, whereas the same scripts ran successfully with a few 10,000 records.

Teradata Parallel Transporter Version
Execution Plan generation started.
Execution Plan generation successfully completed.
Job log: /usr/tbuild/
Job id is test_load-1, running on a01faif1a
Teradata Parallel Transporter DataConnector Version
FILE_READER: Instance 1 directing private log report to 'FILE_READER_LOG-1'.

Hi All

If anyone has a script generator that will generate MLoad, FLoad, or FExp empty framework templates, please send it to me at chiram1.etl@gmail.com


Hi all,

I'm new to FastLoad and I'm looking for some expert advice. I've tried to load data into my table using a CSV file as a flat file, but I keep getting the following errors:

Number of rec/msg:7140
Start to send to RDBMS with record 1
RECORD is too long by 21311 bytes
Field 1: 168641609

The full script is as follows:

sessions 8;
errlimit 250;

logon cda/userid

define cust_no (decimal(14,0))
file =X:\Use Id\Cust.CSV;

begin loading d_database.table
errorfiles d_database.err1, d_database.err2;

insert into d_database.table values(:cust_no);
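The "RECORD is too long" error is consistent with FastLoad reading the CSV as fixed-format binary. A CSV file is variable-length text, so FastLoad wants SET RECORD VARTEXT with every field defined as VARCHAR. A sketch of the corrected script, keeping the names from the post (the password is a placeholder, and a missing END LOADING/LOGOFF is assumed):

```shell
# Generate the candidate FastLoad script; review before running.
cat <<'EOF' > load_cust.fl
sessions 8;
errlimit 250;
logon cda/userid,password;
set record vartext ",";
define cust_no (varchar(14))
file = X:\Use Id\Cust.CSV;
begin loading d_database.table
errorfiles d_database.err1, d_database.err2;
insert into d_database.table values(:cust_no);
end loading;
logoff;
EOF
cat load_cust.fl
# fastload < load_cust.fl    # run it on a machine with TTU installed
```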


I tried installing the Teradata Eclipse Plugin on MyEclipse 7.5 and got stuck with some dependency problems, as mentioned below. I am not sure whether the Teradata Eclipse Plugin works with MyEclipse.
Can anybody assist me in resolving these?

Cannot complete the install because some dependencies are not satisfiable
Unsatisfied dependency: [com.teradata.datatools.dtpsupportFeature.feature.group] requiredCapability: org.eclipse.equinox.p2.iu/org.eclipse.datatools.sqltools.sql.ui/0.0.0

I want to insert some values into my table using an INSERT INTO ... SELECT SQL statement:

INSERT INTO tableName (col1, col2, col3)
SELECT col2, col3 FROM tableName;

For col1 I need to pass the value coming from a file (using (:next_var)) and increment it for every row coming from the SELECT.
How can I do this?
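If the incrementing value only has to be unique and sequential, it can be generated inside the SELECT with ROW_NUMBER() instead of being read per row from a file. A sketch; the starting offset 1000 stands in for the value taken from the file:

```shell
# Generate the candidate INSERT..SELECT using ROW_NUMBER() for col1.
# Table/column names follow the post; 1000 is an assumed start value.
cat <<'EOF' > insert_seq.sql
INSERT INTO tableName (col1, col2, col3)
SELECT 1000 + ROW_NUMBER() OVER (ORDER BY col2), col2, col3
FROM tableName;
EOF
cat insert_seq.sql
```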

Checkpoints are used to resume a paused job at the point where it was paused, because of an error caused by the client or by the RDBMS. Am I right with this definition?

I am trying to load two records from a notepad file into a table using the FastLoad utility. It works fine if there is a single record, but when I add a newline char (char(1)), I get the below error.

0006 insert into samples.saat3

Number of recs/msgs: 3780
Starting to send to RDBMS with 1 record
Incorrect number of bytes returned from a File Read!
Expected: 12, Received:2

Below are the scripts used:

Records present in the notepad:
raghu 300 |
ram 200 |

Scripts in fastload utility

create table samples.saat3

I have been asked by the server admins to provide a space requirement to install the following Teradata 12 utilities on a SUN Solaris SPARC server:

(1) CLI and related security libraries
(2) BTEQ
(3) FastLoad
(4) MultiLoad
(5) FastExport


What would cause an MLoad to abend with this error: 2575: Timed out waiting for lock on

No other job is accessing the table, and all I am trying to do is load data into staging. I have never had this error before on loading.

Any ideas how I can get around this?



I would like to be able to produce a file by running a command or batch job which basically exports a table or view (SELECT * FROM tbl) in text form (default conversions to text for dates, numbers, etc. are fine), tab-delimited, with NULLs converted to empty fields (i.e. a NULL column would have nothing between the tab characters), with appropriate line termination (CRLF for Windows), and preferably also with column headings.

This is the same export I can get in SQL Assistant 12.0 by choosing the export option, using a tab delimiter, setting my NULL value to '' and including column headings.
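One way to approximate that from the command line is BTEQ's EXPORT REPORT mode, which keeps column headings, combined with SET SEPARATOR and SET NULL. A sketch only; the logon and table are placeholders, and the exact .SET options should be checked against your BTEQ release:

```shell
# Generate the candidate BTEQ script. A real tab character is injected
# into the SET SEPARATOR line via $TAB.
TAB=$(printf '\t')
cat <<EOF > tab_export.bteq
.logon tdpid/user,password
.set width 65531
.set separator '${TAB}'
.set null ''
.export report file=export.txt
select * from db.tbl;
.export reset
.logoff
EOF
cat tab_export.bteq
# bteq < tab_export.bteq    # run on a machine with TTU installed
```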

Hi All,

What versions of the Teradata utilities (SQL Assistant, Teradata Administrator, ODBC drivers) will work on a laptop that has Windows 7 installed? Where can I find the correct download site for them?



Teradata experts,

I use Teradata SQL Assistant to view our SQL Server. I recently upgraded to v13, and I can no longer see all my tables in Explorer view, though I can still query them. It looks like the Explorer view is only allowing a set number of tables (around 40).

Hi, I have a dilemma… I have a table A that has some special characters in a column aa. When inserting into table B, the following SQL gives the untranslatable character error:

Insert into B.bb
sel CASE WHEN A.aa IS NULL OR A.aa = '' THEN '*' ELSE A.aa END AS A.aa
from A

The column DDL is VARCHAR(16) CHARACTER SET LATIN NOT CASESPECIFIC in both the A and B tables. However, if I skip this test and just insert the column A.aa as it is, I can do something like this later:

Insert into B.bb
sel A.aa from A;
update B.bb
set B.bb = '*'
where B.bb is null or B.bb = '';

Hi All,

I have a requirement to test whether a database is up or down. I selected BTEQ to do my testing and below is my logic. As BTEQ continuously tries to logon, my script runs continuously without exiting.

BTEQ Interactive Mode:

$ bteq
Teradata BTEQ for UNIX5. Copyright 1984-2007, NCR Corporation. ALL RIGHTS RESERVED.
Enter your logon or BTEQ command:
.logon
*** Warning: DBS CRASHED OR SESSIONS RESET. RECOVERY IN PROGRESS
*** Warning: DBS CRASHED OR SESSIONS RESET. RECOVERY IN PROGRESS
*** Warning: DBS CRASHED OR SESSIONS RESET. RECOVERY IN PROGRESS

Assume this is a valid UNIX server with NO Teradata database; my test should succeed only for a valid server and a valid database. Could someone help me in identifying the right BTEQ parameters? Thanks

Hi All,

I have a requirement where I have to build (generate) an Excel sheet from a table, but it can have more than 2-3 lakh records, and as we all know one Excel sheet can't have more than 65K+ records. So records beyond 65K should go on the next sheet in the same Excel workbook, and this process will be part of a job on a UNIX box. Using BTEQ EXPORT, or anything else? Second, that Excel sheet should be stored in a database table as BLOB data. Does anybody have experience with Java or C for this? I am not sure how we can achieve it. Please share your ideas if anybody has faced this type of situation. Thanks
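For the 65K-per-sheet limit, one simple approach on the UNIX box is to split the exported file into fixed-size pieces and then import each piece as its own sheet. A sketch; the demo uses a 5-line stand-in file and 2-line pieces so it is easy to follow, where the real job would use the BTEQ export file and -l 65000:

```shell
# Stand-in for the BTEQ export output (one record per line).
printf 'r1\nr2\nr3\nr4\nr5\n' > export.txt
# Split into 2-line pieces named sheet_aa, sheet_ab, ... ;
# for the real file you would use: split -l 65000 export.txt sheet_
split -l 2 export.txt sheet_
ls sheet_*
```

Each sheet_* file then stays under the per-sheet row limit; assembling them into one workbook (and storing it as a BLOB) would still need a Java/C/Apache-POI-style step.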