

I have to concatenate first name, last name, and zip code.

The concatenation should not contain vowels ("aAeEiIoOuU") or any special characters.

I have the logic, but it removes only a single character ('U').
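A hedged sketch of one way to strip both character sets in a single pass, assuming Teradata 14.0+ where REGEXP_REPLACE is available (the table and column names are placeholders):

```sql
-- Hypothetical names (emp, first_name, last_name, zip_code).
-- The pattern drops every vowel and every character that is not a
-- letter or digit, in one call.
SELECT REGEXP_REPLACE(first_name || last_name || zip_code,
                      '[aAeEiIoOuU]|[^0-9A-Za-z]', '') AS concat_clean
FROM   emp;
```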

Hi All,

I'm new to the Teradata database. I have downloaded the installer and installed it on my laptop. I want to create a new database on my laptop for testing purposes. What/which utility should I use to create a new database?
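Any SQL front end (BTEQ, SQL Assistant) can submit the statement; a minimal hedged sketch, with the parent database, name, and size as placeholders:

```sql
-- Carves a test database out of DBC's PERM space; adjust the owner
-- and size to your installation.
CREATE DATABASE testdb FROM dbc AS PERM = 100e6;
```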

2. A user reported to me that they have some blocking processes and other jobs can't continue working. Please find below some of my questions:

a. What tool can I use to inspect the processes?
b. What SELECT statement is needed to identify the blocking process?
c. How do I kill the blocking process?
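For the SQL side, a hedged sketch assuming the DBC.SessionInfoV view and the SYSLIB.AbortSessions function available on recent releases (the host id, user name, and session number below are placeholders); the Locking Logger / ShowLocks utilities and Viewpoint cover the tool side:

```sql
-- List active sessions to find the suspect.
SELECT SessionNo, UserName, LogonDate, LogonTime
FROM   DBC.SessionInfoV;

-- Abort a blocking session (requires the ABORTSESSION privilege).
SELECT SYSLIB.AbortSessions(1, 'blocking_user', 1234, 'Y', 'Y');
```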

I am trying to explore the usage of SESSIONS and CHECKPOINT in Teradata FastExport/FastLoad/MLoad and TPT (Teradata Parallel Transporter).
If we don't specify the SESSIONS parameter in the TTUs, it defaults to one session per AMP, so a 100-AMP system gets 100 sessions, which can be expensive for a small table.

But my question is: what factors/criteria do we need to consider when using the SESSIONS and CHECKPOINT parameters in the TTUs and TPT? They impact the performance of exports and loads.
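For context, a hedged FastLoad fragment showing where the two knobs sit (table names and numbers are illustrative only; sensible values depend on table size, AMP count, and restartability needs):

```sql
SESSIONS 8;              /* cap well below one-per-AMP for a small table */
BEGIN LOADING mydb.stage_t
  ERRORFILES mydb.err1, mydb.err2
  CHECKPOINT 100000;     /* restart point every 100k rows */
```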

Can we call a BTEQ script from another BTEQ script? I am working in MS-DOS.

Also I have another question on "activitycount".

Is it possible to get the activity count for any INSERT/UPDATE/DELETE operation on a table? If yes, how?
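A hedged BTEQ sketch covering both questions (file and table names are placeholders): .RUN FILE executes another script in the current session, and ACTIVITYCOUNT reflects the rows affected by the most recent statement, DML included:

```sql
-- Run a child script from the parent script.
.RUN FILE = child_script.bteq

-- ACTIVITYCOUNT after a DML statement holds the affected-row count.
DELETE FROM mydb.t1 WHERE col1 = 10;
.IF ACTIVITYCOUNT = 0 THEN .QUIT 1;
```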

The EXPORT operator job ran successfully on Windows with TTU 13.10, but failed on Linux with the same TTU version. Has anyone faced a similar issue? I think it's a Teradata bug.

============= Error =================
Teradata Parallel Transporter Export Operator Version
myexport: private log not specified
myfile: TPT19222 Operator instance 1 processing file '/build/file1.dat'.
myexport: connecting sessions
TPT_INFRA: TPT02345: Error: Cannot create column schema, status = Schema Error
myexport: TPT12111: Error 44 creating column schema
myexport: disconnecting sessions

output report:

A Total Records Read = 35000
B Total ErrorTable 1 = 1250 (Not loaded due to error)
C Total ErrorTable 2 = 30 (Duplicate UPIs only)
D Total Inserts Applied = 33700
E Total Duplicate Rows = 20

How do I append a date to the export filename in a BTEQ script?

.export file =export_file_name_09092010
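BTEQ has no built-in date substitution in .EXPORT, but a common workaround is to generate the dated command into a temporary script and .RUN it; a hedged sketch (the filename pattern and date format are assumptions):

```sql
-- Write a one-line script containing the dated .EXPORT command...
.EXPORT REPORT FILE = setexp.bteq
SELECT '.EXPORT DATA FILE=export_file_name_' ||
       (CURRENT_DATE (FORMAT 'MMDDYYYY')) (CHAR(8))
  (TITLE '');
.EXPORT RESET
-- ...then execute it so the real export uses the dated name.
.RUN FILE = setexp.bteq
```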


I am testing the performance of the following:

1. Insert data from stage table to Target using BTEQ (Insert/Select)


2. Fastexport data from stage table and fastload it into target table


We are trying to get MQ messages from MQSeries and load them using the MLOAD utility.

We are getting the following error message:

**** 12:01:53 UTY4015 Access module error '35' received during 'read' operation

number '0': 'EOF encountered before end of record'

Hi guys,

In what document can I find all the TPT error codes?

It would also be helpful if you could list the documents covering the error codes for the other tools.



I have a date column reporting_date in my table (date format 'YYYY-MM-DD').
I would like to change the column so it shows just the month and year; I do not want the actual day of the month.

I have tried this

select income_month (format 'yyyy-mm'), income from D_CFAEISDB.FTP_INCOME
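For what it's worth, a hedged sketch: a DATE column always stores the full date, and a FORMAT phrase only affects display in tools that honor it (BTEQ does; ODBC-based tools such as SQL Assistant often do not), so casting to CHAR makes the trimmed form explicit (names taken from the post's description):

```sql
SELECT CAST(CAST(reporting_date AS DATE FORMAT 'yyyy-mm') AS CHAR(7)) AS rpt_month,
       income
FROM   D_CFAEISDB.FTP_INCOME;
```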

Hi Guys,

I'm loading flat files into a staging table using the LOAD operator (no indexes). Once the load is done, I do a SELECT * on that table and insert into an already populated table with a unique PPI using a DDL operator.
My problem is that if the insert encounters a duplicate record, the entire batch insert is skipped, so no records are inserted. Is there a way for me to just ignore duplicate records, like I can when using the LOAD operator?

I have changed my script to use an EXPORT and then an IMPORT instead of the SELECT * / INSERT INTO, but that results in slower performance.
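One hedged alternative that keeps the single INSERT...SELECT: filter out rows whose key already exists in the target, so a duplicate no longer aborts the whole batch (table and key names are placeholders):

```sql
INSERT INTO tgt
SELECT s.*
FROM   stage s
WHERE  NOT EXISTS (SELECT 1 FROM tgt t WHERE t.upi_col = s.upi_col);
```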

I am looking for a reference, which I'm sure I've seen before, that lists out the differences between TPT (Or Teradata PT) and the other Utilities like Fastload and Multiload. This is very useful when converting existing scripts over to TPT.

Hi all,
What is the difference between query performance in SQL Assistant and BTEQ? I observed a significant improvement in the performance of a query in BTEQ compared to when it is run in SQL Assistant. Is there any specific reason for this?

Thanks and Regards

Is this a user-only forum, or is this also a way to feed back to Teradata developers?

This would be a good opportunity for feedback to developers. Many of the tools being produced are fantastic ideas but are released not quite ready for prime time. A place to post this and get other users' feedback, as well as maybe some developer feedback, would be great.


I am new to Teradata and trying to understand how Teradata does joins internally.

I am going through the Teradata documentation and I have the question below.

table structures used in query:
(Employee_Number INTEGER NOT NULL,
Location_Number INTEGER,
Dept_Number INTEGER,
Emp_Mgr_Number INTEGER,
Last_Name CHAR(20),
First_Name VARCHAR(20),
Salary_Amount DECIMAL(10,2))
UNIQUE PRIMARY INDEX ( Employee_Number )
INDEX ( Job_Code )
INDEX ( Dept_Number );


I have SQL that needs to run some preprocessing, creating a volatile table to be exported to a file. If I use a setup step, TPT uses a different session for each step and the volatile table goes away. Does anyone have a workaround?

Any idea why TPT uses more spool than the same query in other tools such as SQL Assistant?

Hello everyone,

I am new to Teradata and I am trying to upload a file using FastLoad.
When I run the batch file I get the following error in the log file:

Not enough fields in vartext data record number: 1

In the Teradata table, I have VARCHAR(100) for all the columns, so I'm not sure what is going on.

My script is the following:

sessions 1;
errlimit 10;
logon DWSANA/username,password;

set record vartext;
begin loading D_CFAEISDB.FTP_profile_fl01 errorfiles d_cfaeisdb.Err1,d_cfaeisdb.Err2;
DEFINE FILE=\\is-m-54lxx-fs12\DBP_PLANNING\Fast Load\FTP profile.txt;

Then I checked whether it is really bypassing the login username and password.
I tried to log in with the following command,
but it didn't allow me to log in; instead it prompted for the username and password.


Seeking configuration assistance for TD SQL Assistant v13.x to bypass connection performance limitations.

Hello, I've recently had SQL Assistant 13.0 installed on a new machine and had my data copied over, including my old history. However, when I load the data into the Access DB it looks fine in Access, but SQL Assistant won't show the SQL string, only the rows and run date (I mapped all the fields correctly and checked the field types etc. on each table).


I just upgraded to SQL Assistant 13, and I'm getting very frustrated. Normally I have a long page of code (2-3k lines) and I need to test as I go along, which means I highlight a section of code, submit it, and while that is running I edit other parts of the code: very productive! v13 seems not to allow this, which is very annoying. I've been through all the options to see how I can change this and there is nothing.

Has anybody ever loaded idxformat 8 COBOL files with FastLoad?
I have loaded idxformat 4 files successfully, but when reading the idxformat 8 files it looks like the INMOD doesn't know where one record ends and the next begins.

This is the first time I have worked with COBOL files, so any info you can give on this kind of file is appreciated.

I can export some fairly large queries in the SQL Assistant Java edition, but there appears to be a soft limit somewhere around 20 MB where it just cuts off and creates an empty file. The results appear to be accurate; the export is unwilling.

When we have a SQL statement all on one line, it works fine, but when I add line breaks (for readability) or I import SQL with line breaks, it will throw the following:

[Teradata Database] [TeraJDBC] [Error 3706] [SQLState 42000] Syntax error: expected something between the beginning of the request and the 'from' keyword.

Hi there,
We have been having a problem here at our site for some time and are wondering if anybody else has had a similar problem.
The problem is a locking issue where the system deadlocks and the only remedy seems to be a database restart or a manual job abort.

Here are some instances where it has occurred:

- Arcmain acquires locks on dictionary tables, triggering a deadlock with any user action needing access to the same tables. The system hangs until it is restarted or one of the jobs holding a lock is aborted.

Hi there,
I'm looking to run the SHOWLOCKS utility at regular intervals. For this I am using the Teradata Manager Scheduler, and have a recorded script called 'CHECK_TEST_LOCKS'.

At the moment I can run it and have the results sent out to a text file:

rcons.exe -S CHECK_TEST_LOCKS -O c:\Output.txt

Hi Gurus,

I have an INMOD for processing a feed file, eliminating some unwanted rows, and then feeding the records to FastLoad. But I am not able to compile and use it. I have GCC (GNU C Compiler) and TCC (Tiny C Compiler) on my Windows machine, and I have the FastLoad utility installed. I tried to compile the code into a DLL, and I succeeded using the commands below.

gcc -c mydll.c
gcc -shared -o mydll.dll mydll.o

But I don't know what's next....

Do I have to compile it in a separate way, and do I need to link it to FastLoad somewhere?
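No separate link step should be needed; FastLoad loads the DLL at run time when the script names it in the DEFINE statement. A hedged sketch (field names are placeholders, and the DLL's entry point must follow the FastLoad INMOD interface):

```sql
DEFINE emp_id   (VARCHAR(10)),
       emp_name (VARCHAR(20))
  INMOD = mydll.dll;
```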


I have a file that contains an Employer record ("M"), followed by multiple Employee records ("B"). There is nothing in the Employee record that ties it to the Employer record other than the fact that it follows the Employer record in the text file.

My question:

Is there a way (in MLoad or BTEQ), when I am loading this data, to retain the employer_id when I read an "M" (employer) record and assign it to every "B" (employee) record that follows, until I encounter another "M" record? I can do this in SAS and VB, but I would prefer keeping it in SQL if possible.

Record Example:
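If the file is first loaded into a staging table with a sequence column that preserves file order, the carry-forward can be done in SQL; a hedged sketch with assumed names (stage_file, seq, rec_type, employer_id):

```sql
SELECT seq, rec_type,
       -- Latest 'M' record's employer_id carried down to following 'B' rows.
       MAX(CASE WHEN rec_type = 'M' THEN employer_id END)
         OVER (ORDER BY seq ROWS UNBOUNDED PRECEDING) AS employer_id
FROM   stage_file;
```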

Please advise on the privileges needed by the TTU13 TQS Client (sometimes called TDS Client).

We will install the package on Windows 7 with users having only read-only rights.

All is fine after installing SQL Assistant: no problem installing or running the programs.

After installing TQS Client, when running SQL Assistant or TQS Client an error message appears when the program attempts to open a shared memory segment.

If the user is privileged we can set "run as administrator" and it will work fine.

Hi All,

Can anybody clarify the below?

We are trying to load 3 files into 3 different tables within the same MultiLoad script.
As part of an impact analysis, we have been asked: if one of the file loads fails, will all the INSERT statements be rolled back?
Basically we want to know:
1) Will these 3 inserts be executed in parallel or sequentially at the RDBMS level?
2) Will they be part of the same BT/ET, or will 3 different BT/ETs be submitted?
3) Can we load multiple files into the same table within the same MLoad, and will that be sequential or parallel?

I am trying to run tlogview in an MVS job, in similar terms as given in the TPT Reference guide.

My TPT job is running fine, but nothing seems to happen when the TLOGVIEW step runs.

Can someone please post an example of how to run it on MVS?

Can someone also help me with all the parameters and options available in the tlogview command for reading TPT logs?

Hi All,

Is there a way I can load the output from an XML process into Teradata? This is new ground for me, so I have no clue how to even start looking for a solution.



Can someone please provide a sample script? I have tried many ways but couldn't get it to work. Your help is really appreciated.

TPT13 is installed on Windows 2003 x64; I installed both the x32 and x64 versions of TPT13.
After installation, the environment variables pointed to the x64 version.
COPLIB=C:\Program Files\Teradata\Client\13.0\CLIv2\
DataConnectorLibPath=C:\Program Files (x86)\Teradata\Client\13.0\bin
TWB_ROOT=C:\Program Files\Teradata\Client\13.0\Teradata Parallel Transporter
_MSGCATLOCATION=C:\Program Files\Teradata\Client\13.0\Teradata Parallel Transporter\msg64


Hi All,

I am encountering a strange problem with my MLoad script. I have specified an ERRLIMIT of 1 record in the MLoad script to limit the number of records being rejected into the ET table, and thus make the script fail.

ERRLIMIT 1 /* Should make the job fail if more than 1 record is rejected */

When I run the script, all the records go into the ET table, but MLOAD still completes with RC=0. The ET table has error 2666 for the PKG_ESTB_DT field. I know the 2666 error and can resolve the issue, but I am worried as to why it returns RC=0 even after setting ERRLIMIT to 1 record.

Appreciate any help.


Can someone please send a sample script to export the column names as the header row using FastExport / TPT Export?
I am able to do this using BTEQ Export, but am facing difficulty with FastExport/TPT Export.
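One hedged FastExport workaround: emit the header as a literal and UNION it ahead of the data, with an ordering column to keep the header row first (table and column names are placeholders; FORMAT TEXT expects character data, hence the TRIMs and casts):

```sql
.EXPORT OUTFILE out.txt MODE RECORD FORMAT TEXT;
SELECT ch FROM (
  SELECT 1 AS ord, 'col1|col2' (VARCHAR(200)) AS ch
  UNION ALL
  SELECT 2, TRIM(col1) || '|' || TRIM(col2)
  FROM   mydb.t1
) d
ORDER BY ord;
.END EXPORT;
```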


Hi Gurus,
I'm quite new to Teradata. I need to provide a real-time solution that will be able to load about 10+ billion records a day into a table with about 150 columns, while users are able to query the table.
We are currently able to do about 7-8 billion a day using a different database, and we need to do 10+; that is why we are investigating Teradata.

The data is received in text files that come in throughout the day.
We need to load the files as they come in.

Everyone says there are 5 phases in MLOAD and 2 phases in FLOAD.
I am a little bit puzzled,
as FLOAD also does the steps below:
1. logon, session, logtable, and UV/ET table creation, like the preliminary phase in MLOAD
2. syntax checking and sending the SQL to the TD server, like the DML transaction phase in MLOAD

Hi All,

I am new to Teradata and working on putting image files into Teradata. I have successfully been able to do it using SQL Assistant. I was trying to check whether I can do the same using BTEQ. I have been struggling with it and need your help and suggestions.

Here is my table which i created:
create table ALL_WKSCRATCHPAD_DB.devX_lob (
id varchar (80),
mime_type varchar (80),
binary_lob binary large object
)unique primary index (id);

The BTEQ file which I am using looks like this:

.set width 132
.logon usrname,pwd

delete from ALL_WKSCRATCHPAD_DB.devX_lob all;

Will FLOAD and MLOAD take the same time to load an empty table? Will there be any appreciable difference in load timing?


When using SAS/ACCESS to Teradata on Dell/XP with TTU 7.5, the following error may be encountered:

ERROR: Teradata connection: CLI2: ICULOADLIBERR(527): Error loading ICU libraries. .


I am using the BTEQ import/export option. I have followed these steps in order:

Step 1
export data from table to file

.set width 900
.export data file=myoutput.txt
Select item_id from
sample 10;

.export reset;

Step 2
creating a new table

create table db.tab2
(item_id integer);

Step 3
import data from file to table

.import data file=myoutput.txt,skip=3
.quiet on
.repeat *
using item_id(integer)
insert into db.tab2 values (:item_id);


I just installed version 13 and found that I can copy the answer set and paste it into a text file, but once I hover over an Excel file the clipboard reverts to its previous contents. Has anyone come across that one?


I am using Fast export to copy a Teradata table to a flat file on a windows server.

.logtable database.logtable_tablename;
.logon teradata_box/user_id,password;
.begin export sessions 2;
.export outfile myoutputfile.txt RECORD MODE FORMAT TEXT;
select *
from database.table
where date_column ge cast ('01/01/2010' as date format 'dd/mm/yyyy')
and key_column eq 999999999999;
.end export;

The query runs fine in SQL Assistant, producing a readable text file. When I run fexp.exe the output file has a lot of unreadable rubbish in it.

Hi all,

While MLoading a single table with multiple files, does Teradata load these files in parallel or one after the other?


Does FastLoad/FastExport have a Windows 64-bit version?