1550 - 1600 of 2643 tags for Tools

I have two questions:
1. What is an easy way to tell how many loader slots are currently tied up? Is there a query against DBC that can be run?
2. Why do the MLOAD sessions always show as IDLE in the session monitor?

I have a BTEQ script that uses the IMPORT command to load data from a text file into a Teradata table:

.set quiet on;
.import vartext ',' file = \\MyServer\Proj\filename.txt
.repeat *
using BUSINESS_TYPE (varchar(255)), ACCOUNT_NB (varchar(255)), CNTRCT_DT (varchar(255))
INSERT INTO ZALTS1RV.ZALV281_QRMEUR_DRO (BUSINESS_TYPE, ACCOUNT_NB, CNTRCT_DT)
VALUES (:BUSINESS_TYPE, :ACCOUNT_NB, :CNTRCT_DT);

The script ran successfully and gave the output message:

*** Finished at input row 5455 at Wed Feb 10 15:03:50 2010
*** Total number of statements: 5455, Accepted: 5429, Rejected: 26

Is there any way to capture those rejected rows and write them to another file? Thanks
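
BTEQ does not write rejected records to a separate reject file the way FastLoad or MultiLoad do; the failing statements are only reported in the BTEQ output. A minimal sketch of one workaround, assuming the ERROROUT setting is available in this BTEQ release: route error messages to the standard error stream, then redirect that stream to its own file when the script is invoked, and pull the failing input row numbers from it afterwards.

.set quiet on;
.set errorout stderr;      /* assumption: send the *** Failure messages to stderr */
.import vartext ',' file = \\MyServer\Proj\filename.txt
.repeat *
using BUSINESS_TYPE (varchar(255)), ACCOUNT_NB (varchar(255)), CNTRCT_DT (varchar(255))
INSERT INTO ZALTS1RV.ZALV281_QRMEUR_DRO (BUSINESS_TYPE, ACCOUNT_NB, CNTRCT_DT)
VALUES (:BUSINESS_TYPE, :ACCOUNT_NB, :CNTRCT_DT);

Invoking the script with stdout and stderr redirected to separate files then leaves only the rejection messages (including the input row numbers) in the error file; the rejected source rows themselves still have to be pulled out of the original input file by row number.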

I have a DECIMAL(17,2) field which has a default value of 999999999999999.99 (9(15).9(2)). When MLOADing that data to Teradata, the value is being rounded to 1000000000000000.00 (1 followed by 15 zeros, then .00), which causes a decimal overflow error; the default value is also being modified. I tried a CAST statement on similar data in TD and it did the same thing. The CAST statements I tried are below:

select CAST(999999999999999.99 as decimal(17,2)) as test;   -- 9(15).9(2), gives 1000000000000000.00

but

select CAST(99999999999999.99 as decimal(17,2)) as test;    -- 9(14).9(2), gives 99999999999999.99

Please let me know if there is a way to have it MLOADed as-is.
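
A minimal sketch of one thing to try, under the assumption that the rounding comes from the 17-digit value passing through an intermediate FLOAT conversion (which only holds about 15 significant digits) rather than staying DECIMAL end to end: present the value as a character string and cast it, so the conversion is character-to-DECIMAL.

select CAST('999999999999999.99' as decimal(17,2)) as test;   -- character literal, no float intermediate

In the MultiLoad layout the same idea means defining the input field as VARCHAR (for example via VARTEXT) and letting the INSERT cast it to DECIMAL(17,2), rather than declaring the field as FLOAT or DECIMAL in the layout; whether that matches the actual MLOAD job here is an assumption, since the layout is not shown.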

After installing .NET 3.5 and MapInfo 10, SQL Assistant 13 freezes when running a query a second time from the query window. You can open a new window and run a query, and then the next query will freeze. It just keeps showing that the query is running, but in the background the query has completed. Hitting the Cancel button does not work.

I would like to capture the number of rows inserted, and I need to do this all in a stored procedure. Here is my SQL for inserting the rows:

Insert into DEV_CORE.TABLE1 (SurrogateKey, COL1, COL2...)
Select SURROGATEKEY, COL1, COL2
From DEV_STG.TABLE1 as STG
WHERE NOT EXISTS
  ( SELECT 1 FROM DEV_CORE.TABLE1 as CORE1
    WHERE CORE1.COL1 = STG.COL1
      AND CORE1.COL2 = STG.COL2 )

So once the records are inserted I need to find out how many were inserted. I'm thinking COUNT(*) or somehow using ROW_NUMBER? Please help. Thanks.
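
A minimal sketch of one way to do this inside a Teradata stored procedure, assuming the ACTIVITY_COUNT status variable is available in the procedure; the procedure name and the audit table are hypothetical:

REPLACE PROCEDURE DEV_CORE.LOAD_TABLE1 ()
BEGIN
  DECLARE v_rows BIGINT DEFAULT 0;

  INSERT INTO DEV_CORE.TABLE1 (SurrogateKey, COL1, COL2)
  SELECT SURROGATEKEY, COL1, COL2
  FROM DEV_STG.TABLE1 AS STG
  WHERE NOT EXISTS
    ( SELECT 1 FROM DEV_CORE.TABLE1 AS CORE1
      WHERE CORE1.COL1 = STG.COL1
        AND CORE1.COL2 = STG.COL2 );

  /* ACTIVITY_COUNT reflects the rows affected by the immediately preceding DML */
  SET v_rows = ACTIVITY_COUNT;

  /* hypothetical audit table, used only to illustrate storing the count */
  INSERT INTO DEV_CORE.LOAD_AUDIT (tbl_nm, rows_inserted, load_ts)
  VALUES ('TABLE1', :v_rows, CURRENT_TIMESTAMP);
END;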

Hi, I am looking for anyone who can provide information on their experiences with XML Services, just comments on issues like: installation (any problems or special requirements); the impact on the nodes (any additional overheads that should be considered); and any issues in usage. Many thanks.

Hi friend, help me to solve the following question. Which of the following statements is true regarding temporary space?
A. Temporary space is spool space currently not used.
B. Temporary space is permanent space currently not used.
C. Temporary space is subtracted from SysAdmin.
D. Temporary space is assigned at the table level.

I can no longer run SQL Assistant from a command line since upgrading to Version 13.0. I am using a -C parameter to name an ODBC data source, but since SQL Assistant now starts up with a default of Teradata.net, I'm thinking that it does not recognize the data source. Any suggestions?

Hi all, what is the difference between a NUSI and a full table scan? Regards, KIRAN

Hi all, can TPump load data on a real-time basis? As per the documentation provided by Teradata, we can use TPump for real-time data warehousing, but in TPump we read from a file which is in a predefined format.

We upgraded to a new version of Teradata SQL Assistant, version 13. Users are connecting through an ODBC connection to an Oracle database, then adding a new database in the Database Explorer. They can drill into the Tables folder and expand a specific table, but when they try to expand the Columns folder, they get the following error: "The input string was not in the correct format." They also get this error message when using the Tools > List Columns option. They have Read access to the table and can use the table and columns in a SQL command, but they can't see the columns of a given table. Any ideas on this?

I'm new to MultiLoad utility programming. I need a sample script which updates, inserts, or deletes on two or more tables from a single source file. Thanks in advance. Learntera
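
A minimal sketch of what such a job can look like, assuming a delimited input file with an action code column; every database, table, column and file name below is hypothetical:

.LOGTABLE mydb.mload_two_tables_log;
.LOGON tdpid/userid,password;
.BEGIN MLOAD TABLES mydb.tab1, mydb.tab2;

.LAYOUT inrec;
  .FIELD action_cd  * VARCHAR(1);
  .FIELD key_col    * VARCHAR(10);
  .FIELD val_col    * VARCHAR(20);

.DML LABEL ins_tab1;
  INSERT INTO mydb.tab1 (key_col, val_col) VALUES (:key_col, :val_col);

.DML LABEL upd_tab2;
  UPDATE mydb.tab2 SET val_col = :val_col WHERE key_col = :key_col;

.DML LABEL del_tab2;
  DELETE FROM mydb.tab2 WHERE key_col = :key_col;

.IMPORT INFILE source_file.txt FORMAT VARTEXT '|'
  LAYOUT inrec
  APPLY ins_tab1 WHERE action_cd = 'I'
  APPLY upd_tab2 WHERE action_cd = 'U'
  APPLY del_tab2 WHERE action_cd = 'D';

.END MLOAD;
.LOGOFF;

The key points are that .BEGIN MLOAD names every target table, each .DML LABEL holds the DML for one table, and a single .IMPORT can APPLY several labels with WHERE conditions that route each input record.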

Hello friends, we are planning to purchase DataStage 8.0 and install it on Z-Linux. I have a few questions, and I would appreciate it if someone could provide some info:
1) Can we connect to Teradata V2R5 using DataStage 8.0 on Z-Linux?
2) If we have to upgrade our Teradata version, what version works with DataStage 8.0 on Z-Linux?
Thanks

I have a table with a BLOB column containing a .ZIP file. I am able to manually fire a SELECT and save the result to a file by providing the required file name. I just need a simple command to do it automatically, where I can hardcode the path and extension to which I can save it. It can be incorporated either in BTEQ or a stored procedure...

Hi, I have installed the Teradata demo version 12.0 on Windows Vista. When I try to access BTEQ or any other utility, I have a login problem.
Q1. Is there any default userid and password to log in to BTEQ? I had tried:

Enter your logon or BTEQ command:
.login tduser
Password: tduser

The message I get is:

*** CLI error: MTDP: EM_NOHOST(224): name not in HOSTS file or names database.
*** Return code from CLI is: 224
*** Error: Logon failed!
*** Total elapsed time was 10 seconds.
Teradata BTEQ 08.02.00.00 for WIN32.
Enter your logon or BTEQ command:

I am new to BTEQ. Kindly help me in resolving this issue. Any help would be greatly appreciated.
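
For what it's worth, a minimal sketch of the usual shape of the fix, on the assumption that EM_NOHOST means BTEQ cannot resolve the Teradata system name (the tdpid) rather than that the user or password is wrong: make sure the hosts file (C:\Windows\system32\drivers\etc\hosts) has an entry of the form <tdpid>cop1 pointing at the demo machine, for example "127.0.0.1 demotdat demotdatcop1", and then log on with that tdpid in front of the username. The tdpid "demotdat" and the tduser/tduser account are assumptions to verify against the local install; the dbc user is also available on every system.

.logon demotdat/tduser
tduser
select session;    /* quick check that the logon worked */
.logoff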

Hi all, I'm trying to use TPT to load data directly from an Oracle table to Teradata, but I'm having a little problem with numeric data types. In Oracle, I have a lot of different NUMBER(10,0) and other columns. Let's take one for example, a column called DURATION. The actual data in this column fits in an INTEGER type easily, so in the SCHEMA in the TPT script file and in Teradata I identified it as INTEGER. Now when running the TPT script, I get this error:

Error: Row 1: data from source column too large for target column
Column: DURATION, Data: '3'

And it is like this for every numeric column. I even tried to put exactly the same data type everywhere, NUMBER(10,0) in Oracle, DECIMAL(10,0) in Teradata and the TPT SCHEMA, but no luck. I tried DECIMAL(38,0) as well; no luck either, same problem. Is it an Oracle ODBC driver problem? (The Oracle version is 10.) The only way I see is to CHAR everything and then, in the later ETL stages, cast it back to whatever I need. Any help would be great!
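
A minimal sketch of the workaround the poster mentions (convert to character on the Oracle side and let Teradata cast it back), assuming the ODBC operator is driven by a SelectStmt attribute; all object names are hypothetical:

DEFINE SCHEMA ora_schema
(
  DURATION VARCHAR(11)        /* character form of the Oracle NUMBER(10,0) */
);

/* in the ODBC operator definition */
VARCHAR SelectStmt = 'SELECT TO_CHAR(DURATION) AS DURATION FROM src_table',

/* in the APPLY: Teradata casts the VARCHAR back to the INTEGER target column */
APPLY ('INSERT INTO tgt_db.tgt_table (DURATION) VALUES (:DURATION);')
TO OPERATOR (load_op)
SELECT * FROM OPERATOR (odbc_op);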

Hi all, I have a MultiLoad error in DataStage. The input is UTF8, and so is the charset of the MultiLoad script. I get 6706 and 6705 errors in Teradata. What can I do to suppress this type of problem? I don't have a staging table for this.
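
Errors 6705/6706 are the "untranslatable character" errors, so the usual direction is to clean or translate the offending characters rather than suppress the error. A minimal sketch of the SQL-side version of that idea, assuming the target column is LATIN and the data arrives as UNICODE (column and view names hypothetical):

SELECT TRANSLATE(src_col USING UNICODE_TO_LATIN WITH ERROR) AS cleaned_col
FROM some_source_view;   -- characters with no LATIN equivalent become the substitute error character

Without a staging table this kind of cleansing has to happen before MultiLoad, for example in the DataStage job itself, or the job has to run with a client character set that actually matches the data.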

Hi, can anyone tell me how to insert the number of rows affected by a SQL statement in a BTEQ script into a table? Can I use ACTIVITYCOUNT for this? I want to insert the number of records inserted within the same script. Thanks, mehaboob
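
In BTEQ, ACTIVITYCOUNT is only usable in conditional logic such as .IF; it cannot be substituted into an INSERT as a value. A minimal sketch of the branching it does support, plus the usual alternative of re-deriving the count in SQL, with hypothetical table names:

INSERT INTO mydb.target_tab
SELECT * FROM mydb.source_tab;

.IF ERRORCODE <> 0 THEN .QUIT 8;
.IF ACTIVITYCOUNT = 0 THEN .GOTO nothing_loaded;

/* BTEQ cannot splice ACTIVITYCOUNT into SQL text, so re-derive the count    */
/* with the same predicate as the load (or move the logic into a stored      */
/* procedure and capture ACTIVITY_COUNT there)                               */
INSERT INTO mydb.load_audit (tbl_nm, row_cnt, load_ts)
SELECT 'target_tab', COUNT(*), CURRENT_TIMESTAMP
FROM mydb.source_tab;

.LABEL nothing_loaded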

Hi guys, I need to know how to load a table from multiple files in a single script using FastLoad. Any link or thread you know of would also help. Thanks in advance, Kapil
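
FastLoad expects one data source per run, so the usual ways to get several files into one table are either to concatenate the files before the job or to run FastLoad once per file and hold END LOADING back until the last run (the table stays in the loading state in between). A sketch of the second approach, with all names hypothetical; it is two short runs rather than literally one script, so treat it as a pattern to adapt and verify against the FastLoad manual:

/* run 1: file1, no END LOADING */
LOGON tdpid/userid,password;
SET RECORD VARTEXT "|";
DEFINE col1 (VARCHAR(10)), col2 (VARCHAR(20))
FILE = file1.txt;
BEGIN LOADING mydb.tgt_tab ERRORFILES mydb.tgt_et, mydb.tgt_uv;
INSERT INTO mydb.tgt_tab VALUES (:col1, :col2);
LOGOFF;

/* run 2: file2, END LOADING finishes the job */
LOGON tdpid/userid,password;
SET RECORD VARTEXT "|";
DEFINE col1 (VARCHAR(10)), col2 (VARCHAR(20))
FILE = file2.txt;
BEGIN LOADING mydb.tgt_tab ERRORFILES mydb.tgt_et, mydb.tgt_uv;
INSERT INTO mydb.tgt_tab VALUES (:col1, :col2);
END LOADING;
LOGOFF;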

Hi all, can anyone tell me the most efficient way of updating a record: delete and insert, or update?

Hi - Is there anywhere that I can download just SQL Assistant 12.0 from? I've got a slow/unreliable internet connection, and it will take me well over an hour to download the full Teradata Express 12.0 installation package. I've already downloaded and tried SQL Assistant 13.0, but it doesn't do what I need. Thanks for your help.

Hi, I need to insert data into one table from two different tables. I am using something like this:

insert into table1 values(firstname, phoneno, lastname)
select fname,
       (select t2.phonenumber from table2 t2, table3 t3 where t2.firstname = t3.fname),
       lname
from table 3;

I am inserting firstname and lastname from table3, and I am inserting phoneno from table2, but here I am getting a s...
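
The post is cut off before the actual error, but one rewrite worth considering regardless, as a sketch using the names from the post: name the target columns instead of using VALUES with an insert-select, and join table2 to table3 once in the outer query rather than through a scalar subquery (a LEFT JOIN keeps the rows from table3 that have no matching phone number).

insert into table1 (firstname, phoneno, lastname)
select t3.fname,
       t2.phonenumber,
       t3.lname
from table3 t3
left join table2 t2
  on t2.firstname = t3.fname;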

Hello ...

I've downloaded and configured the 40Gb version of TDE13.

I've not used Teradata since V2R5, and back then the database, all the tools and the documentation were pre-configured for me ...

I have several questions:
- Are the PDF files included in the VM, or are they a separate download?
- I'm able to start the instance and log in with BTEQ, though logging in takes 45 seconds (repeatably). I've made the change to /etc/hosts to associate my IP address with dbccop1, but am at a loss as to how to speed up the login within the VM.

Every developer has a favorite text editor which explains why there are so many available (see http://en.wikipedia.org/wiki/List_of_text_editors). My new favorite text editor is NotePad++, an open source application available under the GNU General Public License. It is primarily distributed for Windows but is likely to run on Linux/Mac using something like WINE.

Hi, I have a column ISDEFAULT BYTEINT. When I run the script it gives me the error: DATACONNECTOR_OPERATOR: Data byte count error. Expected 24919, received 1. Please help; I am using a TPT script. Thanks.

Hello, I am fairly new to TPT, so here's my question; I hope I can get an answer, as it would help me out tremendously. I need to load data into a BYTE(16) data type. I have a string of hex values (i.e. 'C9A86586E4EB7D4C9B8C2EA41416E67F'); how can I load this into a BYTE(16) field?

When I use a CASE expression in a FastExport script, it fails, even though it works fine if we use it directly in Teradata SQL Assistant. Can someone please tell me why this happens? (Does FastExport not support the CASE clause?)

select (CASE WHEN length(trim(cast(BASE.Link_Prof_center_Hie_cd as CHAR(10)))) = 4
             THEN '000' || Trim(BASE.Link_Prof_center_Hie_cd)
             ELSE BASE.Link_Prof_center_Hie_cd END)
from ecr_pims_stg.temp_units_like_fields Base

Error:
22:00:27 UTY8724 Select request submitted to the RDBMS.
**** 22:00:27 UTY8713 RDBMS failure, 3706: Syntax error: expected something between '(' and the 'trim' keyword.
========================================================================
=                                                                      =
=                          Logoff/Disconnect                           =
=                                                                      =
========================================================================
**** 22:00:27 UTY6215 The restart log table has NOT been dropped.
**** 22:00:28 UTY6212 A successful disconnect was made from the RDBMS.

Thanks in advance!!!
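
The failure is most likely not CASE itself but the LENGTH() function: when the query runs through SQL Assistant, the ODBC driver rewrites the ODBC scalar function LENGTH() into Teradata's native CHARACTER_LENGTH(), whereas FastExport sends the SELECT to the database as-is, where LENGTH() is not recognized at that release (the parser then trips over the TRIM keyword that follows the parenthesis). A sketch of the same statement written with the native function, which should behave the same way in both tools:

select (CASE WHEN CHARACTER_LENGTH(trim(cast(BASE.Link_Prof_center_Hie_cd as CHAR(10)))) = 4
             THEN '000' || Trim(BASE.Link_Prof_center_Hie_cd)
             ELSE BASE.Link_Prof_center_Hie_cd END)
from ecr_pims_stg.temp_units_like_fields Base;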

Does anyone have some working PM API C code they would like to share? I'm beating my head against the supplied sample code.

Every time I run it I get: "Create Monitor Object failed! 340: Class not registered".

Your help and suggestions are appreciated.

Is there the ability to do conditional branching in TPT (similar to .IF operator in BTEQ)? I have looked in the ver 12 documentation and don't see anything like that and I am hoping I just missed it.

R

While executing any query, SQL Assistant is throwing an error message like 'AMP DOWN: THE REQUEST AGAINST NON-FALLBACK (TABLENAME) CANNOT BE DONE'. Please suggest a solution. Thanks in advance.

Hi All,

My FastExport script:

.LOGTABLE XXXX.XX_test_exp_LOG;
.LOGON XXXX/XXXXX,XXXXX;
.BEGIN EXPORT SESSIONS 16 TENACITY 4 SLEEP 6;
.EXPORT OUTFILE /abc/def/test_exp.dat;
LOCKING XXXX.test_exp FOR ACCESS
SELECT
TRIM(COALESCE(CAST(col1 AS VARCHAR(12)),'')) || '|' ||
TRIM(COALESCE(CAST(col2 AS VARCHAR(10)),'')) || '|' ||
TRIM(COALESCE(CAST(col3 AS VARCHAR(5)),''))
FROM XXXX.test_exp;
.END EXPORT;
.LOGOFF;

Data in the table:
5 e e
3 c c
4 d d
1 a a
2 b b

Data exported by FastExport:
^H^@^@^E^@5|e|e
^H^@^@^E^@3|c|c
^H^@^@^E^@4|d|d
^H^@^@^E^@1|a|a
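
Those leading bytes (^H^@^@^E^@ and so on) are the record-length and indicator bytes that FastExport writes in its default mode; the selected text itself is fine. A sketch of the usual fix, assuming a plain flat file is wanted: ask for RECORD mode and TEXT format on the .EXPORT statement so only the character data is written.

.EXPORT OUTFILE /abc/def/test_exp.dat MODE RECORD FORMAT TEXT;

MODE RECORD drops the indicator bytes and FORMAT TEXT drops the two-byte record length, leaving one text row per line.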

Can someone tell me how the SQL Server datatypes below relate to Teradata, and how they can be converted to valid TD types? What are the Teradata equivalents of these SQL Server datatypes?
int
tinyint
smallint
bigint
decimal(18,5)
numeric[p[,s]]
float
real
smallmoney
money
varchar(10)
char(4)
nvarchar(50)
smalldatetime
datetime
...
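
A sketch of the commonly used mappings, as an illustration rather than an authoritative matrix (ranges differ in places; for example SQL Server tinyint is unsigned 0-255 while Teradata BYTEINT is signed -128 to 127, and money/datetime precision should be checked case by case):

CREATE TABLE mydb.type_map_example        -- hypothetical table, one column per mapping
( c_int            INTEGER                            -- int
, c_tinyint        SMALLINT                           -- tinyint (BYTEINT only if values stay <= 127)
, c_smallint       SMALLINT                           -- smallint
, c_bigint         BIGINT                             -- bigint
, c_decimal        DECIMAL(18,5)                      -- decimal(18,5)
, c_numeric        DECIMAL(18,5)                      -- numeric[p[,s]] maps to DECIMAL(p,s)
, c_float          FLOAT                              -- float
, c_real           FLOAT                              -- real
, c_smallmoney     DECIMAL(10,4)                      -- smallmoney
, c_money          DECIMAL(19,4)                      -- money
, c_varchar        VARCHAR(10)                        -- varchar(10)
, c_char           CHAR(4)                            -- char(4)
, c_nvarchar       VARCHAR(50) CHARACTER SET UNICODE  -- nvarchar(50)
, c_smalldatetime  TIMESTAMP(0)                       -- smalldatetime
, c_datetime       TIMESTAMP(3)                       -- datetime
);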

I have about 500 tables with 5.4 TB of data in SQL Server, and we are migrating from SQL Server to Teradata. What would be the best approach for doing so? Is the OLE DB tool a good way to load data from SQL Server to TD, and how fast will it be?

Can you use wildcard characters in a MultiLoad .IMPORT statement? Say I have several files I am MLOADing together, like below. Is there a way of substituting a wildcard so that I would not have to list each file in an import statement?

.IMPORT INFILE "C:\Master_File_A" FORMAT TEXT LAYOUT Master_I APPLY INSERTS Where ID_TYPE IN ('I','E','B');
.IMPORT INFILE "C:\Master_File_B" FORMAT TEXT LAYOUT Master_I APPLY INSERTS Where ID_TYPE IN ('I','E','B');
.IMPORT INFILE "C:\Master_File_C" FORMAT TEXT LAYOUT Master_I APPLY INSERTS Where ID_TYPE IN ('I','E','B');

I tried this substitution, but got an error. Is there a way to do this in MLoad?

.IMPORT INFILE "C:\Master_File_*" FORMAT TEXT LAYOUT Master_I APPLY INSERTS Where ID_TYPE IN ('I','E','B');

How can I make the first row of the file exported by a FastExport script contain the names of the exported columns?
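
FastExport has no header-row option of its own; the trick usually suggested is to make the header a row of the result set and force it to sort first. A sketch under that assumption (column and table names hypothetical, and worth testing, since FastExport's multi-session handling of ORDER BY should be confirmed on the target release):

SELECT out_line FROM
( SELECT 0 AS ord, 'col1|col2|col3' AS out_line
  UNION ALL
  SELECT 1, TRIM(col1) || '|' || TRIM(col2) || '|' || TRIM(col3)
  FROM mydb.my_table
) t
ORDER BY ord;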

This issue is really throwing me off. Every time I click on another query tab, it jumps to the top of the query and does not stay at the position I was last viewing, which is a real pain when jumping between large amounts of code. Also, when you save a tab, it does not rename it until you close and reopen it. Are there fixes?

UPDATE STG
FROM MRD_ETL_SIT.RPT_GCI_FMLY_STG STG,
( SELECT S.PTY_ID,
         ID1.PTY_ID_NO AS FMLY_GCI_NO,
         ID2.PTY_ID_NO AS SUP_FMLY_GCI_NO,
         ID3.PTY_ID_NO AS HQ_GCI_NO
  FROM MRD_ETL_SIT.RPT_GCI_FMLY_STG S
  LEFT OUTER JOIN MRD_SIT.PTY_ID ID1
    ON (S.FMLY_PTY_ID=ID1.PTY_ID AND ID1.PTY_ID_TYP_CD='GCI' AND ID1.EFECT_END_DT='9999-12-31')
  LEFT OUTER JOIN MRD_SIT.PTY_ID ID2
    ON (S.SUP_FMLY_PTY_ID=ID2.PTY_ID AND ID2.PTY_ID_TYP_CD='GCI' AND ID2.EFECT_END_DT='9999-12-31')
  LEFT OUTER JOIN MRD_SIT.PTY_ID ID3
    ON (S.HQ_PTY_ID=ID3.PTY_ID AND ID3.PTY_ID_TYP_CD='GCI' AND ID3.EFECT_END_DT='9999-12-31')
) DT
SET FMLY_GCI_NO = DT.FMLY_GCI_NO
  , SUP_FMLY_GCI_NO = DT.SUP_FMLY_GCI_NO
  , HQ_GCI_NO = DT.HQ_GCI_NO
WHERE (STG.PTY_ID=DT.PTY_ID AND DT.PTY_ID<>'90004653704');

Explanation
1) First, we lock a distinct MRD_ETL_SIT."pseudo table" for write on a RowHash to prevent global deadlock for MRD_ETL_SIT.RPT_GCI_FMLY_STG.
2) Next, we lock a distinct MRD_SIT."pseudo table" for read on a RowHash to prevent global deadlock for MRD_SIT.ID3.
3) We lock MRD_ETL_SIT.RPT_GCI_FMLY_STG for write, and we lock MRD_SIT.ID3 for read.
4) We execute the following steps in parallel.
   1) We do an all-AMPs RETRIEVE step from a single partition of MRD_SIT.ID3 with a condition of ("MRD_SIT.ID3.EFECT_END_DT = DATE '9999-12-31'") with a residual condition of ("(MRD_SIT.ID3.PTY_ID_TYP_CD = 'GCI') AND (MRD_SIT.ID3.EFECT_END_DT = DATE '9999-12-31')") into Spool 2 (all_amps) (compressed columns allowed), which is built locally on the AMPs. Then we do a SORT to order Spool 2 by row hash. The size of Spool 2 is estimated with low confidence to be 2,366,656 rows. The estimated time for this step is 0.28 seconds.
   2) We do an all-AMPs RETRIEVE step from MRD_ETL_SIT.S by way of an all-rows scan with a condition of ("MRD_ETL_SIT.S.PTY_ID <> 90004653704.") into Spool 3 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 3 by row hash. The size of Spool 3 is estimated with high confidence to be 2,364,187 rows. The estimated time for this step is 1.77 seconds.
5) We do an all-AMPs JOIN step from Spool 2 (Last Use) by way of a RowHash match scan, which is joined to Spool 3 (Last Use) by way of a RowHash match scan. Spool 2 and Spool 3 are right outer joined using a merge join, with a join condition of ("HQ_PTY_ID = PTY_ID"). The result goes into Spool 4 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 4 by row hash. The size of Spool 4 is estimated with low confidence to be 792,803 rows. The estimated time for this step is 0.17 seconds.
6) We execute the following steps in parallel.
   1) We do an all-AMPs RETRIEVE step from Spool 4 by way of an all-rows scan into Spool 7 (all_amps) (compressed columns allowed), which is duplicated on all AMPs. Then we do a SORT to order Spool 7 by row hash. The size of Spool 7 is estimated with low confidence to be 185,515,902 rows. The estimated time for this step is 1 minute and 12 seconds.
   2) We do an all-AMPs RETRIEVE step from a single partition of MRD_SIT.ID2 with a condition of ("MRD_SIT.ID2.EFECT_END_DT = DATE '9999-12-31'") with a residual condition of ("(MRD_SIT.ID2.PTY_ID_TYP_CD = 'GCI') AND (MRD_SIT.ID2.EFECT_END_DT = DATE '9999-12-31')") into Spool 8 (all_amps) (compressed columns allowed), which is built locally on the AMPs. Then we do a SORT to order Spool 8 by row hash. The size of Spool 8 is estimated with low confidence to be 2,366,656 rows. The estimated time for this step is 0.28 seconds.
7) We do an all-AMPs JOIN step from Spool 8 (Last Use) by way of a RowHash match scan, which is joined to Spool 7 (Last Use) by way of a RowHash match scan. Spool 8 and Spool 7 are joined using a merge join, with a join condition of ("SUP_FMLY_PTY_ID = PTY_ID"). The result goes into Spool 9 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 9 by row hash. The size of Spool 9 is estimated with low confidence to be 792,803 rows. The estimated time for this step is 14.99 seconds.
8) We execute the following steps in parallel.
   1) We do an all-AMPs JOIN step from Spool 9 (Last Use) by way of a RowHash match scan, which is joined to Spool 4 (Last Use) by way of a RowHash match scan. Spool 9 and Spool 4 are right outer joined using a merge join, with a join condition of ("Field_1 = Field_1"). The result goes into Spool 10 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 10 by row hash. The size of Spool 10 is estimated with low confidence to be 792,803 rows. The estimated time for this step is 0.67 seconds.
   2) We do an all-AMPs RETRIEVE step from a single partition of MRD_SIT.ID1 with a condition of ("MRD_SIT.ID1.EFECT_END_DT = DATE '9999-12-31'") with a residual condition of ("(MRD_SIT.ID1.PTY_ID_TYP_CD = 'GCI') AND (MRD_SIT.ID1.EFECT_END_DT = DATE '9999-12-31')") into Spool 13 (all_amps) (compressed columns allowed), which is built locally on the AMPs. Then we do a SORT to order Spool 13 by row hash. The size of Spool 13 is estimated with low confidence to be 2,366,656 rows. The estimated time for this step is 0.28 seconds.
9) We do an all-AMPs JOIN step from Spool 10 (Last Use) by way of a RowHash match scan, which is joined to Spool 13 (Last Use) by way of a RowHash match scan. Spool 10 and Spool 13 are left outer joined using a merge join, with a join condition of ("FMLY_PTY_ID = PTY_ID"). The result goes into Spool 1 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. The size of Spool 1 is estimated with low confidence to be 792,803 rows. The estimated time for this step is 0.57 seconds.
10) We do an all-AMPs RETRIEVE step from Spool 1 (Last Use) by way of an all-rows scan with a condition of ("PTY_ID <> 90004653704.") into Spool 17 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. The size of Spool 17 is estimated with low confidence to be 792,803 rows. The estimated time for this step is 0.50 seconds.
11) We do an all-AMPs JOIN step from MRD_ETL_SIT.RPT_GCI_FMLY_STG by way of an all-rows scan with a condition of ("MRD_ETL_SIT.RPT_GCI_FMLY_STG.PTY_ID <> 90004653704."), which is joined to Spool 17 (Last Use) by way of an all-rows scan. MRD_ETL_SIT.RPT_GCI_FMLY_STG and Spool 17 are joined using a single partition hash join, with a join condition of ("MRD_ETL_SIT.RPT_GCI_FMLY_STG.PTY_ID = PTY_ID"). The result goes into Spool 16 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 16 by the sort key in spool field1. The size of Spool 16 is estimated with index join confidence to be 792,803 rows. The estimated time for this step is 1.10 seconds.
12) We do a MERGE Update to MRD_ETL_SIT.RPT_GCI_FMLY_STG from Spool 16 (Last Use) via ROWID.
13) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> No rows are returned to the user as the result of statement 1.

I'm writing an application which, among other things, inserts Unicode strings into Teradata. I don't know how I can see the inserted data to check whether it was inserted properly. SQL Assistant doesn't support Unicode, and I tried the BTEQ command EXPORT DATA FILE=..., but it wrote only strange symbols, not Unicode, to the file...
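
One way to verify what is actually stored, without depending on the client's display or export encoding, is to look at the code points themselves. A sketch, assuming the column is a Unicode CHAR/VARCHAR (column and table names hypothetical):

SELECT unicode_col,
       CHAR2HEXINT(unicode_col) AS code_points   -- hex form of the stored characters
FROM mydb.my_table
SAMPLE 10;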

I looked through the Teradata Tools and Utilities TTU12 and TTU13 compatibility spreadsheets for information about Windows 7 compatibility with 32-bit ODBC. Vista is shown but not Windows 7. I suspect it is compatible but need to show that it is documented. Does anyone have any new information about this? Thanks, Steve
reisiger.sm@pg.com

Hi,

Ever since I installed Teradata SQL Assistant 13 on my machine, I get this error whenever I open Teradata SQL Assistant:

Unable to open History table
Unknown
Unexpected Error. Refer to the following file for details:
C:\Documents and Settings\....\SQLAError.txt

The text in this file says:

12/7/2009 7:16:49 AM
SQLA Version: 13.0.0.7
System.Runtime.InteropServices.COMException
Unknown
at DAO.Database.OpenRecordset(String Name, Object Type, Object Options, Object LockEdit)

Hi, we need to design a data model for a DWH with Teradata as the underlying database. Prior to Teradata we had MySQL and used the 'Enterprise Architect' data modelling tool, but EA does not support Teradata. Please suggest a freeware tool for Teradata data modelling. Regards,

When I recently ran FastExport, the output file was in flat-file format; when I open the file, the data looks like this...

Is there a cost for Teradata Query Scheduler? I have Teradata SQL Assistant 12.0 on my laptop at work. Thank you, Sharon

Hi,

We're using the TPT Stream operator through Informatica 8.6. The operation I am trying to do is quite simple. Here is an outline:

1) I have a table with Identity column defined as UPI.
2) We're planning to do insert and update to the table based on Identity column
3) In Informatica, I used an Update Strategy to flag each record as an insert or an update.
4) 2 streams are used. One for Insert and one for Update.

Hi, I am trying to load data from a pipe-delimited file into a table using FastLoad. All the rows are going to the error table, and the error table shows error codes 2679 and 6760 on two columns which are VARCHAR(32); the data in the flat file for those columns looks like 'Std' and 'N'. When I tried to find out the reason for the errors, the descriptions say:
2679 - error occurred due to conversion from numeric to char or char to numeric, or a bad character in the file
6760 - error occurred due to timestamp conversion
But the data in the file for those columns is character data, and there are no decimal or timestamp values. Can you help out, please? Thanks in advance.
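
Errors 2679/6760 on columns that look perfectly clean usually mean the fields and the columns are out of step, for example the script reading the record in fixed/TEXT format or the DEFINE list not matching the file, so 'Std' ends up being converted into some numeric or timestamp column further along. A sketch of the shape a pipe-delimited FastLoad normally takes, with every field in the DEFINE declared VARCHAR and listed in exact file order (all names hypothetical):

SET RECORD VARTEXT "|";                    /* delimited input: every DEFINE field must be VARCHAR */

DEFINE col_a (VARCHAR(32)),
       col_b (VARCHAR(32)),
       col_c (VARCHAR(10))                 /* one entry per field, in file order */
FILE = input_file.txt;

INSERT INTO mydb.tgt_tab (col_a, col_b, col_c)
VALUES (:col_a, :col_b, :col_c);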

Hi, please let me know if you need any clarification regarding my post from yesterday. Regards, Savitha C V

Hi all, I need your help to solve my issue. We have created a file which has values like 200901,200902,200903 and imported it with .IMPORT, for example:

.import vartext file=./PERD_ID.txt
using VAR_PERD_ID (VARCHAR(50))

so the VAR_PERD_ID variable has values like 200901,200902,200903. The query does not update any records when I pass the parameter as PERD_ID IN (:VAR_PERD_ID). Example:

Update table.X
( Select table1.a y, table2.b z
  from table1, table2
  where table1.a = table2.a
  and PERD_ID IN (:VAR_PERD_ID)
) T1
where X.a = T1.y;

Something like that. The issue is that the update script runs fine without showing an error, but the update completes with no rows changed, even though the inner query fetches some values, so it should update records. I ran the query manually in Teradata with the parameter values hard-coded, like:

Select table1.a y, table2.b z
from table1, table2
where table1.a = table2.a
and PERD_ID IN (200901,200902,200903)

It works fine, but it does not work when I use PERD_ID IN (:VAR_PERD_ID) in the update script. Could anyone please help me find the issue? I am new to BTEQ, so please let me know if you have any idea. Thank you in advance.
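
The likely cause (stated as an assumption, since only fragments of the script are shown): :VAR_PERD_ID is a single VARCHAR whose value is the whole string '200901,200902,200903', so IN (:VAR_PERD_ID) compares PERD_ID against that one string rather than against three separate values. A sketch of one way to keep the single-variable file and still match, treating the variable as a delimited list; it checks the idea with the inner SELECT, and the same predicate would then replace PERD_ID IN (:VAR_PERD_ID) in the UPDATE. The CAST assumes PERD_ID is stored as a number; if it is already character, the CAST and TRIM can be dropped.

.repeat *
using VAR_PERD_ID (VARCHAR(50))
Select table1.a, table2.b
from table1, table2
where table1.a = table2.a
and POSITION(TRIM(CAST(PERD_ID AS VARCHAR(6))) IN :VAR_PERD_ID) > 0;

The alternative is to put one period value per record in the file and let .REPEAT * run the update once per value with a simple PERD_ID = :VAR_PERD_ID comparison.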

Hi, I am loading 2 input files into Teradata in a single MultiLoad script, but MultiLoad behaves unpredictably and loads the rows into the error tables (both UV and ET). However, when loading the same files using the same script but one at a time, in 2 separate loads, it works fine.