1550 - 1600 of 2622 tags for Tools

Hi - Is there anywhere that I can download just SQL Assistant 12.0 from? I've got a slow/unreliable internet connection, and it will take me well over an hour to download the full Teradata Express 12.0 installation package. I've already downloaded and tried SQL Assistant 13.0, but it doesn't do what I need. Thanks for your help.

Hi, I need to insert data into one table from 2 different tables. I am using something like this:

INSERT INTO table1 VALUES (firstname, phoneno, lastname)
SELECT fname,
       (SELECT t2.phonenumber
        FROM table2 t2, table3 t3
        WHERE t2.firstname = t3.fname),
       lname
FROM table3;

I am inserting firstname and lastname from table3 and phoneno from table2, but here I am getting a s
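For what it's worth, a minimal sketch of how an insert like this is usually written with an explicit join (assuming the column names above, and that first name is the join key between table2 and table3):

INSERT INTO table1 (firstname, phoneno, lastname)
SELECT t3.fname,
       t2.phonenumber,
       t3.lname
FROM table3 t3
LEFT OUTER JOIN table2 t2
  ON t2.firstname = t3.fname;   -- one phone number per matching first name

The column list goes in parentheses after the table name; VALUES and SELECT are not combined in a single INSERT.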

Hello ...

I've downloaded and configured the 40Gb version of TDE13.

I've not used Teradata since V2R5, and the database, all the tools, and the documentation were pre-configured for me ...

I have several questions:
- Are the PDF files included in the VM, or are they a separate download?
- I'm able to start the instance and log in with BTEQ, though logging in takes 45 seconds (repeatably). I've made the change to /etc/hosts to associate my IP address with dbccop1, but I am at a loss as to how to speed up the login within the VM.

Every developer has a favorite text editor, which explains why there are so many available (see http://en.wikipedia.org/wiki/List_of_text_editors). My new favorite text editor is Notepad++, an open source application available under the GNU General Public License. It is primarily distributed for Windows but is likely to run on Linux/Mac using something like WINE.

Hi, I have a column ISDEFAULT BYTEINT. When I run the script it gives me the error: DATACONNECTOR_OPERATOR: Data byte count error. Expected 24919, received 1. Please help. I am using a TPT script. Thanks.

Hello, I am fairly new to TPT, so here's my question; I hope I can get an answer, as it would help me out tremendously. I need to load data into a BYTE(16) column. I have a string of hex values (i.e. 'C9A86586E4EB7D4C9B8C2EA41416E67F'); how can I load this into a BYTE(16) field?

When I use a CASE statement in a FastExport script, it fails, even though the same statement works fine when run directly in Teradata SQL Assistant. Can someone please tell me why this happens? (Does FastExport not support the CASE clause?)

SELECT (CASE WHEN length(trim(cast(BASE.Link_Prof_center_Hie_cd as CHAR(10)))) = 4
             THEN '000' || Trim(BASE.Link_Prof_center_Hie_cd)
             ELSE BASE.Link_Prof_center_Hie_cd END)
FROM ecr_pims_stg.temp_units_like_fields Base

Error:
22:00:27 UTY8724 Select request submitted to the RDBMS.
**** 22:00:27 UTY8713 RDBMS failure, 3706: Syntax error: expected something between '(' and the 'trim' keyword.
========================================================================
=                        Logoff/Disconnect                            =
========================================================================
**** 22:00:27 UTY6215 The restart log table has NOT been dropped.
**** 22:00:28 UTY6212 A successful disconnect was made from the RDBMS.

Thanks in advance!!!
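A possible explanation: LENGTH() is not native Teradata syntax on older releases; SQL Assistant typically accepts it only because the ODBC driver rewrites it, while FastExport submits the request to the database unchanged. A sketch of the same SELECT using the native CHARACTER_LENGTH function instead (same table and column as above assumed):

SELECT CASE
         WHEN CHARACTER_LENGTH(TRIM(CAST(BASE.Link_Prof_center_Hie_cd AS CHAR(10)))) = 4
           THEN '000' || TRIM(BASE.Link_Prof_center_Hie_cd)
         ELSE BASE.Link_Prof_center_Hie_cd
       END
FROM ecr_pims_stg.temp_units_like_fields BASE;

CHARACTERS(TRIM(...)) is an equivalent Teradata-specific spelling; either should parse in FastExport.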

Does anyone have some working PM API C code they would like to share? I'm beating my head against the supplied sample code.

Every time I run it I get: "Create Monitor Object failed! 340: Class not registered".

Your help and suggestions are appreciated.

Is there the ability to do conditional branching in TPT (similar to .IF operator in BTEQ)? I have looked in the ver 12 documentation and don't see anything like that and I am hoping I just missed it.

R

While executing any query, SQL Assistant is throwing an error message like 'AMP DOWN: THE REQUEST AGAINST NON-FALLBACK (TABLENAME) CANNOT BE DONE.' Please suggest a solution. Thanks in advance.

Hi All,

My FastExport script:

.LOGTABLE XXXX.XX_test_exp_LOG;
.LOGON XXXX/XXXXX,XXXXX;
.BEGIN EXPORT SESSIONS 16 TENACITY 4 SLEEP 6;
.EXPORT OUTFILE /abc/def/test_exp.dat;
LOCKING XXXX.test_exp FOR ACCESS
SELECT
TRIM(COALESCE(CAST(col1 AS VARCHAR(12)),'')) || '|' ||
TRIM(COALESCE(CAST(col2 AS VARCHAR(10)),'')) || '|' ||
TRIM(COALESCE(CAST(col3 AS VARCHAR(5)),''))
FROM XXXX.test_exp;
.END EXPORT;
.LOGOFF;

Data in the table:
5 e e
3 c c
4 d d
1 a a
2 b b

Data exported using FastExport:
^H^@^@^E^@5|e|e
^H^@^@^E^@3|c|c
^H^@^@^E^@4|d|d
^H^@^@^E^@1|a|a
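Those leading control characters are the record-length/indicator bytes that FastExport writes in its default output mode, not bad data. A sketch of one common workaround (same table and columns assumed) is to request record mode with text format and return a single fixed-width character column:

.EXPORT OUTFILE /abc/def/test_exp.dat MODE RECORD FORMAT TEXT;
LOCKING XXXX.test_exp FOR ACCESS
SELECT CAST(
         TRIM(COALESCE(CAST(col1 AS VARCHAR(12)),'')) || '|' ||
         TRIM(COALESCE(CAST(col2 AS VARCHAR(10)),'')) || '|' ||
         TRIM(COALESCE(CAST(col3 AS VARCHAR(5)),''))
       AS CHAR(30))
FROM XXXX.test_exp;

FORMAT TEXT expects each row to be a single fixed-length character column, which is why the concatenation is cast to CHAR(30); each output line is then padded with blanks to that width.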

Can someone tell me how the data types below in SQL Server relate to Teradata? How can they be converted to valid Teradata types? What would the equivalent Teradata data types be for these SQL Server types?

int, tinyint, smallint, bigint, decimal(18,5), numeric[p[,s]], float, real, smallmoney, money, varchar(10), char(4), nvarchar(50), smalldatetime, datetime ...
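Not an official mapping, but a commonly used one, sketched as a Teradata CREATE TABLE with the SQL Server source type noted per column (the database/table name is a placeholder, and the precision choices such as DECIMAL(19,4) for money are conventions, not requirements):

CREATE TABLE mydb.sqlserver_types_example (
  c_int            INTEGER,                              -- int
  c_tinyint        SMALLINT,                             -- tinyint (0 to 255; BYTEINT is signed -128..127, so SMALLINT is safer)
  c_smallint       SMALLINT,                             -- smallint
  c_bigint         BIGINT,                               -- bigint
  c_decimal        DECIMAL(18,5),                        -- decimal(18,5)
  c_numeric        DECIMAL(18,5),                        -- numeric[p[,s]] maps to DECIMAL(p,s)
  c_float          FLOAT,                                -- float
  c_real           REAL,                                 -- real (REAL, DOUBLE PRECISION and FLOAT are the same 8-byte type in Teradata)
  c_smallmoney     DECIMAL(10,4),                        -- smallmoney
  c_money          DECIMAL(19,4),                        -- money
  c_varchar        VARCHAR(10),                          -- varchar(10)
  c_char           CHAR(4),                              -- char(4)
  c_nvarchar       VARCHAR(50) CHARACTER SET UNICODE,    -- nvarchar(50)
  c_smalldatetime  TIMESTAMP(0),                         -- smalldatetime
  c_datetime       TIMESTAMP(3)                          -- datetime (about 3.33 ms precision)
);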

I have about 500 tables with 5.4 TB of data in SQL Server. We are migrating from SQL Server to Teradata. What would be the best approach for doing so? Is the OLE DB tool good for loading data from SQL Server to TD? How fast will it be?

Can you use wildcard characters in a MultiLoad .IMPORT statement? Say I have several files I am MLOADing together, like below:

.IMPORT INFILE "C:\Master_File_A" FORMAT TEXT LAYOUT Master_I APPLY INSERTS WHERE ID_TYPE IN ('I','E','B');
.IMPORT INFILE "C:\Master_File_B" FORMAT TEXT LAYOUT Master_I APPLY INSERTS WHERE ID_TYPE IN ('I','E','B');
.IMPORT INFILE "C:\Master_File_C" FORMAT TEXT LAYOUT Master_I APPLY INSERTS WHERE ID_TYPE IN ('I','E','B');

Is there a way of substituting a wildcard so that I would not have to list each file in an import statement? I tried this substitution, but got an error. Is there a way to do this in MLOAD?

.IMPORT INFILE "C:\Master_File_*" FORMAT TEXT LAYOUT Master_I APPLY INSERTS WHERE ID_TYPE IN ('I','E','B');

In a FastExport script, how can I make the first row of the exported file contain the names of the exported columns?
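FastExport has no built-in header option; the usual workaround is to generate the header as an extra row in the SELECT and sort it to the top. A sketch, assuming a placeholder table mydb.mytable with columns col1, col2, col3 and a record/text-mode export of a single character column:

.EXPORT OUTFILE /path/out.dat MODE RECORD FORMAT TEXT;
SELECT CAST(txt AS CHAR(60))
FROM (
  SELECT 0 AS seq, 'col1|col2|col3' AS txt           /* header row */
  UNION ALL
  SELECT 1, TRIM(col1) || '|' || TRIM(col2) || '|' || TRIM(col3)
  FROM mydb.mytable
) AS dt
ORDER BY seq;
.END EXPORT;

The seq column only exists to force the header row ahead of the data rows in the sort.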

This issue is really throwing me off. Every time I click on another query tab, it jumps to the top of the query instead of staying at the position I was last viewing. This is a real pain when jumping between large amounts of code. Also, when you save a tab, it is not renamed until you close and reopen it. Are there fixes?

UPDATE STG
FROM MRD_ETL_SIT.RPT_GCI_FMLY_STG STG,
     ( SELECT S.PTY_ID,
              ID1.PTY_ID_NO AS FMLY_GCI_NO,
              ID2.PTY_ID_NO AS SUP_FMLY_GCI_NO,
              ID3.PTY_ID_NO AS HQ_GCI_NO
       FROM MRD_ETL_SIT.RPT_GCI_FMLY_STG S
       LEFT OUTER JOIN MRD_SIT.PTY_ID ID1
         ON (S.FMLY_PTY_ID=ID1.PTY_ID AND ID1.PTY_ID_TYP_CD='GCI' AND ID1.EFECT_END_DT='9999-12-31')
       LEFT OUTER JOIN MRD_SIT.PTY_ID ID2
         ON (S.SUP_FMLY_PTY_ID=ID2.PTY_ID AND ID2.PTY_ID_TYP_CD='GCI' AND ID2.EFECT_END_DT='9999-12-31')
       LEFT OUTER JOIN MRD_SIT.PTY_ID ID3
         ON (S.HQ_PTY_ID=ID3.PTY_ID AND ID3.PTY_ID_TYP_CD='GCI' AND ID3.EFECT_END_DT='9999-12-31')
     ) DT
SET FMLY_GCI_NO = DT.FMLY_GCI_NO,
    SUP_FMLY_GCI_NO = DT.SUP_FMLY_GCI_NO,
    HQ_GCI_NO = DT.HQ_GCI_NO
WHERE (STG.PTY_ID=DT.PTY_ID AND DT.PTY_ID<>'90004653704');

Explanation
  1) First, we lock a distinct MRD_ETL_SIT."pseudo table" for write on a RowHash to prevent global deadlock for MRD_ETL_SIT.RPT_GCI_FMLY_STG.
  2) Next, we lock a distinct MRD_SIT."pseudo table" for read on a RowHash to prevent global deadlock for MRD_SIT.ID3.
  3) We lock MRD_ETL_SIT.RPT_GCI_FMLY_STG for write, and we lock MRD_SIT.ID3 for read.
  4) We execute the following steps in parallel.
     1) We do an all-AMPs RETRIEVE step from a single partition of MRD_SIT.ID3 with a condition of ("MRD_SIT.ID3.EFECT_END_DT = DATE '9999-12-31'") with a residual condition of ("(MRD_SIT.ID3.PTY_ID_TYP_CD = 'GCI') AND (MRD_SIT.ID3.EFECT_END_DT = DATE '9999-12-31')") into Spool 2 (all_amps) (compressed columns allowed), which is built locally on the AMPs. Then we do a SORT to order Spool 2 by row hash. The size of Spool 2 is estimated with low confidence to be 2,366,656 rows. The estimated time for this step is 0.28 seconds.
     2) We do an all-AMPs RETRIEVE step from MRD_ETL_SIT.S by way of an all-rows scan with a condition of ("MRD_ETL_SIT.S.PTY_ID <> 90004653704.") into Spool 3 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 3 by row hash. The size of Spool 3 is estimated with high confidence to be 2,364,187 rows. The estimated time for this step is 1.77 seconds.
  5) We do an all-AMPs JOIN step from Spool 2 (Last Use) by way of a RowHash match scan, which is joined to Spool 3 (Last Use) by way of a RowHash match scan. Spool 2 and Spool 3 are right outer joined using a merge join, with a join condition of ("HQ_PTY_ID = PTY_ID"). The result goes into Spool 4 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 4 by row hash. The size of Spool 4 is estimated with low confidence to be 792,803 rows. The estimated time for this step is 0.17 seconds.
  6) We execute the following steps in parallel.
     1) We do an all-AMPs RETRIEVE step from Spool 4 by way of an all-rows scan into Spool 7 (all_amps) (compressed columns allowed), which is duplicated on all AMPs. Then we do a SORT to order Spool 7 by row hash. The size of Spool 7 is estimated with low confidence to be 185,515,902 rows. The estimated time for this step is 1 minute and 12 seconds.
     2) We do an all-AMPs RETRIEVE step from a single partition of MRD_SIT.ID2 with a condition of ("MRD_SIT.ID2.EFECT_END_DT = DATE '9999-12-31'") with a residual condition of ("(MRD_SIT.ID2.PTY_ID_TYP_CD = 'GCI') AND (MRD_SIT.ID2.EFECT_END_DT = DATE '9999-12-31')") into Spool 8 (all_amps) (compressed columns allowed), which is built locally on the AMPs. Then we do a SORT to order Spool 8 by row hash. The size of Spool 8 is estimated with low confidence to be 2,366,656 rows. The estimated time for this step is 0.28 seconds.
  7) We do an all-AMPs JOIN step from Spool 8 (Last Use) by way of a RowHash match scan, which is joined to Spool 7 (Last Use) by way of a RowHash match scan. Spool 8 and Spool 7 are joined using a merge join, with a join condition of ("SUP_FMLY_PTY_ID = PTY_ID"). The result goes into Spool 9 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 9 by row hash. The size of Spool 9 is estimated with low confidence to be 792,803 rows. The estimated time for this step is 14.99 seconds.
  8) We execute the following steps in parallel.
     1) We do an all-AMPs JOIN step from Spool 9 (Last Use) by way of a RowHash match scan, which is joined to Spool 4 (Last Use) by way of a RowHash match scan. Spool 9 and Spool 4 are right outer joined using a merge join, with a join condition of ("Field_1 = Field_1"). The result goes into Spool 10 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 10 by row hash. The size of Spool 10 is estimated with low confidence to be 792,803 rows. The estimated time for this step is 0.67 seconds.
     2) We do an all-AMPs RETRIEVE step from a single partition of MRD_SIT.ID1 with a condition of ("MRD_SIT.ID1.EFECT_END_DT = DATE '9999-12-31'") with a residual condition of ("(MRD_SIT.ID1.PTY_ID_TYP_CD = 'GCI') AND (MRD_SIT.ID1.EFECT_END_DT = DATE '9999-12-31')") into Spool 13 (all_amps) (compressed columns allowed), which is built locally on the AMPs. Then we do a SORT to order Spool 13 by row hash. The size of Spool 13 is estimated with low confidence to be 2,366,656 rows. The estimated time for this step is 0.28 seconds.
  9) We do an all-AMPs JOIN step from Spool 10 (Last Use) by way of a RowHash match scan, which is joined to Spool 13 (Last Use) by way of a RowHash match scan. Spool 10 and Spool 13 are left outer joined using a merge join, with a join condition of ("FMLY_PTY_ID = PTY_ID"). The result goes into Spool 1 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. The size of Spool 1 is estimated with low confidence to be 792,803 rows. The estimated time for this step is 0.57 seconds.
 10) We do an all-AMPs RETRIEVE step from Spool 1 (Last Use) by way of an all-rows scan with a condition of ("PTY_ID <> 90004653704.") into Spool 17 (all_amps) (compressed columns allowed), which is redistributed by hash code to all AMPs. The size of Spool 17 is estimated with low confidence to be 792,803 rows. The estimated time for this step is 0.50 seconds.
 11) We do an all-AMPs JOIN step from MRD_ETL_SIT.RPT_GCI_FMLY_STG by way of an all-rows scan with a condition of ("MRD_ETL_SIT.RPT_GCI_FMLY_STG.PTY_ID <> 90004653704."), which is joined to Spool 17 (Last Use) by way of an all-rows scan. MRD_ETL_SIT.RPT_GCI_FMLY_STG and Spool 17 are joined using a single partition hash join, with a join condition of ("MRD_ETL_SIT.RPT_GCI_FMLY_STG.PTY_ID = PTY_ID"). The result goes into Spool 16 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 16 by the sort key in spool field1. The size of Spool 16 is estimated with index join confidence to be 792,803 rows. The estimated time for this step is 1.10 seconds.
 12) We do a MERGE Update to MRD_ETL_SIT.RPT_GCI_FMLY_STG from Spool 16 (Last Use) via ROWID.
 13) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> No rows are returned to the user as the result of statement 1.

I'm writing an application which, among other things, inserts Unicode strings into Teradata. I don't know how I can see the inserted data to check whether it was inserted properly. SQL Assistant doesn't support Unicode. I tried the BTEQ command EXPORT DATA FILE=..., but it wrote only strange symbols to the file, not Unicode...

I looked through the Teradata Tools and Utilities TTU12 and TTU13 compatibility spreadsheets for information about Windows 7 compatibility with 32-bit ODBC. Vista is shown but not Windows 7. I suspect it is compatible, but I need to show that it is documented. Does anyone have any new information about this? Thanks, Steve (reisiger.sm@pg.com)

Hi,

Ever since I installed Teradata SQL Assistant 13 on my machine, I get this error whenever I open Teradata SQL Assistant:

Unable to open History table
Unknown
Unexpected Error. Refer to the following file for details:
C:\Documents and Settings\....\SQLAError.txt

The text in this file says:

12/7/2009 7:16:49 AM
SQLA Version: 13.0.0.7
System.Runtime.InteropServices.COMException
Unknown
at DAO.Database.OpenRecordset(String Name, Object Type, Object Options, Object LockEdit)

Hi, we need to design a data model for a DWH with Teradata as the underlying database. Before Teradata we had MySQL and used the 'Enterprise Architect' data modelling tool, but EA does not support Teradata. Please suggest a freeware tool for Teradata data modelling. Regards,

When I recently ran FastExport, the output file was in flat-file format; when I open the file, the data looks like this

Is there a cost for Teradata Query Scheduler? I have Teradata SQL Assistant 12.0 on my laptop at work. Thank you, Sharon

Hi,

We're using the TPT Stream operator through Informatica 8.6. The operation I am trying to do is quite simple. Here is an outline:

1) I have a table with Identity column defined as UPI.
2) We're planning to do insert and update to the table based on Identity column
3) In Informatica, I used Update Strategy to flag the record is insert or update.
4) 2 streams are used. One for Insert and one for Update.

Hi, I am trying to load data from a pipe-delimited file into a table using FastLoad. All the rows are going to the error table, and the error table shows error codes 2679 and 6760 on two columns which are VARCHAR(32); the data in the flat file for those columns looks like 'Std' and 'N'. When I tried to find out the reason for the errors, the descriptions say:

2679 - error occurred due to conversion from numeric to char or char to numeric, or a bad character in the file
6760 - error occurred due to timestamp conversion

But the data in the file for those columns is character, and there are no decimal or timestamp values. Can you help out, please? Thanks in advance.
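One common cause with delimited input is that the DEFINE fields are not all VARCHAR: with SET RECORD VARTEXT every input field must be defined as VARCHAR, and FastLoad then converts to the target column types at INSERT time. A sketch with placeholder file, table, and column names:

SET RECORD VARTEXT "|";
DEFINE
  in_col1 (VARCHAR(32)),
  in_col2 (VARCHAR(32)),
  in_std  (VARCHAR(32)),
  in_flag (VARCHAR(32))
FILE = mydata.txt;
/* all input fields are VARCHAR; conversion happens in the INSERT */
INSERT INTO mydb.mytable (col1, col2, std_col, flag_col)
VALUES (:in_col1, :in_col2, :in_std, :in_flag);

It is also worth checking that the DEFINE order matches the field order in the file; a shifted field can push character data into a numeric or timestamp column and produce exactly these 2679/6760 errors.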

Hi, please let me know if you need any clarification regarding my post from yesterday. Regards, Savitha C V

Hi All, I need your help to solve my issue. We have created a file containing values like 200901,200902,200903 and imported it with .IMPORT. Example:

.IMPORT VARTEXT FILE=./PERD_ID.txt
USING VAR_PERD_ID (VARCHAR(50))

The VAR_PERD_ID variable holds values like 200901,200902,200903. The query does not update any records when I pass the parameter as PERD_ID IN (:VAR_PERD_ID). Example:

UPDATE table.X
( SELECT table1.a y, table2.b z
  FROM table1, table2
  WHERE table1.a = table2.a
    AND PERD_ID IN (:VAR_PERD_ID) ) T1
WHERE X.a = T1.y;

Something like that. The issue is that the update script runs without any error and completes with no rows changed, but the inner query does fetch some values, so it should update some records. I ran the query manually in Teradata with the parameter values hard-coded, like:

SELECT table1.a y, table2.b z
FROM table1, table2
WHERE table1.a = table2.a
  AND PERD_ID IN (200901,200902,200903)

It works fine, but it does not work when I use PERD_ID IN (:VAR_PERD_ID) in the update script. Could anyone please help me find the issue? I am new to BTEQ, so please let me know if you have any ideas. Thank you in advance.
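A likely explanation: with .IMPORT VARTEXT the whole line 200901,200902,200903 is bound to :VAR_PERD_ID as one single string, so PERD_ID IN (:VAR_PERD_ID) compares PERD_ID against that single value and matches nothing. One workaround sketch (assuming PERD_ID is numeric and the list stays comma-separated with no embedded spaces) is to test membership with POSITION instead of IN:

SELECT table1.a AS y, table2.b AS z
FROM table1, table2
WHERE table1.a = table2.a
  AND POSITION(',' || TRIM(CAST(PERD_ID AS VARCHAR(10))) || ','
               IN  ',' || :VAR_PERD_ID || ',') > 0;   -- true when PERD_ID appears in the list

The same predicate can replace PERD_ID IN (:VAR_PERD_ID) inside the update's derived table.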

Hi, I am loading 2 input files into Teradata in a single MultiLoad script, but MultiLoad behaves unpredictably and loads rows into the error tables (both UV and ET). However, when I load the same files using the same script but one at a time in 2 separate loads, it works fine.

Hi, I imported data with MLOAD. The load completed successfully, but I got messages like this in the Acquisition Phase. Here is part of the output file:

**** 11:32:10 UTY1802 Processing Import Sequence 1, Source Sequence 200000.
**** 11:32:10 UTY1802 Processing Import Sequence 1, Source Sequence 300000.
**** 11:32:11 UTY1802 Processing Import Sequence 1, Source Sequence 400000.
**** 11:32:11 UTY1802 Processing Import Sequence 1, Source Sequence 500000.
**** 11:32:11 UTY1802 Processing Import Sequence 1, Source Sequence 600000.
**** 11:32:11 UTY1802 Processing Import Sequence 1, Source Sequence 700000.
**** 11:32:12 UTY1802 Processing Import Sequence 1, Source Sequence 800000.
**** 11:32:12 UTY1802 Processing Import Sequence 1, Source Sequence 900000.
Candidate records considered:........ 965758....... 965758
Apply conditions satisfied:.......... 965758....... 965758
Candidate records not applied:....... 0....... 0
Candidate records rejected:.......... 0....... 0
Start : 11:31:44 - SAT NOV 21, 2009
End   : 11:33:44 - SAT NOV 21, 2009
Highest return code encountered = '0'.

Solutions to this problem will be appreciated. Thanks in advance.

I am trying to load data into SET and MULTISET tables with MLOAD (I tried both ways) and I am getting the following error:

**** 00:35:00 UTY0805 RDBMS failure, 2801: Duplicate unique prime key error in databasename_tablename_Log.

Hi, I want to export Teradata queries and their output into a text file. To do so I am using the following commands:

.EXPORT FILE=filename
.SET WIDTH 3000
.SET TITLEDASHES OFF;
SELECT col1, col2 FROM T1 WHERE id='Z101';
.EXPORT RESET;
.LOGOFF
.EXIT
EOF

But the above writes the data to the file (filename) as:

col1 col2
v1 v2
... ...

What I need is the output in the file in this format:

SELECT col1, col2 FROM T1 WHERE id='Z101';
col1 col2
v1 v2
... ...

Solutions to this problem will be appreciated. Thanks in advance.
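One simple workaround (a sketch, not necessarily the only way) is to select the statement text itself as a literal just before the real query, so that both the text and the result rows land in the exported report file:

.EXPORT FILE=filename
.SET WIDTH 3000
.SET TITLEDASHES OFF
SELECT 'SELECT col1, col2 FROM T1 WHERE id=''Z101'';' (TITLE '');
SELECT col1, col2 FROM T1 WHERE id='Z101';
.EXPORT RESET;

The (TITLE '') attribute suppresses the column heading on the literal row, so only the statement text appears above the result.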

Hi, I would like to get the list of view names that are created on a particular table. For example, there are 2 views, view1 and view2, created on a table Table1. Is there any query by which I can find out that these 2 views (view1 & view2) exist on Table1? Thanks, Madhavi.
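One approach (a sketch; it is only a text match on the stored DDL, so it can return false positives and can miss views that reference the table indirectly through other views) is to search the view definitions in the dictionary:

SELECT DatabaseName, TableName AS ViewName
FROM DBC.Tables
WHERE TableKind = 'V'
  AND RequestText LIKE '%Table1%';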

Hi,

Does anyone know if the final version of the Java SQLA will contain the database export (DDL generation) functionality that was available in the early beta version?

It was one of the best features, and I cannot find it anymore ;(

Regards
Artur

Hi all, I installed Teradata SQL Assistant 13 on my Windows 7 machine, but I can only execute one query. After the first query is done, I can't execute another query. All I can do is shut it down and start another Teradata SQL Assistant. Does anyone know how to solve this problem? Thanks

Hi everyone, I am not able to configure Teradata Manager 12 (which came with Teradata 12 Express Edition). I have referred to many manuals and searched Google for some time, but it didn't do any good. Can anyone help me out with this? Thanks

Does a compressed column have an impact on query execution when it is involved in a left outer join? The column is compressed to a value, e.g. '$'. Thank you

We are using FastLoad in a series of ETL jobs which need to be scheduled. When we double-click the .CMD file, the job completes successfully; however, when the same file is scheduled using the Task Scheduler, the job reaches the FastLoad step and FastLoad throws an exception, and the debugger window appears if you log on to the server under the same usern

Hi, I need to select data for the last seven days from a database that has data for nearly 20 years. Can someone please give me the code to extract data for the last seven days using Teradata SQL? Thanks in advance. Regards, learninggeek
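A minimal sketch, assuming the table has a DATE column; mydb.mytable and txn_date are placeholder names:

SELECT *
FROM mydb.mytable
WHERE txn_date >= CURRENT_DATE - 7    -- rows from the last seven days; adjust the offset if today should count as day 1
  AND txn_date <= CURRENT_DATE;

If the column is a TIMESTAMP, compare CAST(ts_col AS DATE) in the same way.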

Hi ,

We have a FastLoad process that runs fine when we double-click a .CMD file in Explorer; however, when we try to schedule the same job using the Task Scheduler, FastLoad aborts with an error before the LOGON command is even processed.

The FastLoad version is 12.00.00.004, and this is running on Windows Server 2003.

SQLA fails on Win 7 RC with

SQLA Version: 13.0.0.7
System.NullReferenceException
Object reference not set to an instance of an object.
at Teradata.SQLA.MainFrm.ToolbarMgr_ToolValueChanged(Object sender, ToolEventArgs e) in F:\ttu130_efix_snap\tdcli\qman\sqla\MainFrm.vb:line 640
at Infragistics.Win.UltraWinToolbars.UltraToolbarsManager.OnToolValueChanged(ToolEventArgs e)
at Infragistics.Win.UltraWinToolbars.UltraToolbarsManager.FireEvent(ToolbarEventIds id, EventArgs e)

Hi All, I am working with Teradata FastLoad and FastExport scripts. I would like to know of any site or book that can give me in-depth knowledge of loading scripts. Kindly suggest. Thanks in advance. Regards, Varray

While using BTEQ IMPORT I am getting the following error. I know that this is due to a data item not matching the definition, i.e. the definition is set to VARCHAR(5) and the data item is 6 characters.

What I would like to know is how to halt the code and exit with an error code. The objective is to stop the code from processing any more records as soon as an error occurs. At the moment it carries on to process all the records:

.SET ERROROUT STDOUT
.SET QUIET ON

DROP TABLE BUSINESS_SECTOR_TEAMS;

CREATE TABLE BUSINESS_SECTOR_TEAMS
(
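A sketch of how the import loop could be made to stop on the first bad record, using the standard BTEQ settings REPEATSTOP and MAXERROR (the data file name, USING clause, and column list here are placeholders, since the rest of the script is not shown):

.SET REPEATSTOP ON
.SET MAXERROR 1
.IMPORT DATA FILE = business_sector_teams.dat
.REPEAT *
USING (team_code VARCHAR(5))
INSERT INTO BUSINESS_SECTOR_TEAMS (team_code) VALUES (:team_code);
.IF ERRORCODE <> 0 THEN .QUIT 8;

REPEATSTOP ON makes the .REPEAT loop stop as soon as a request in it fails, MAXERROR 1 makes BTEQ terminate once any error severity has been recorded, and the final .IF turns the failure into a non-zero exit code for the calling job.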

Can I roll back in ANSI mode without the Transient Journal? I have a requirement to update a table; if the number of records updated does not match the intended number of records, the change should not be committed to the DB. Is there any way to accomplish this other than using ANSI mode / the Transient Journal? Ours is a very small system with 20 AMPs.

Hi All, I need to write a BTEQ script to update my table. Before the update I need to capture the number of records it is going to update by running a SELECT statement. Once the update statement is complete, I want to see how many records it actually updated. If the before and after counts match, commit the operation; otherwise raise an error, release the lock, and do a ROLLBACK. As far as I know I need to use ANSI transaction mode, but I'm very new to using ANSI mode in BTEQ. Could anyone help me by providing a sample BTEQ script for this scenario? Thanks in advance. Learn and share. Yours, Ravindra Red
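Not a full solution to comparing a before-count with the update count (BTEQ cannot easily hold the first ACTIVITYCOUNT for later comparison), but a sketch of the ANSI-mode skeleton that commits or rolls back based on the update's ACTIVITYCOUNT; the table, condition, logon values, and the expected count of 1000 are placeholders:

.SET SESSION TRANSACTION ANSI;
.LOGON tdpid/username,password;

UPDATE mydb.mytable
SET some_col = 'new value'
WHERE some_filter = 'x';

.IF ACTIVITYCOUNT <> 1000 THEN .GOTO UNDO

COMMIT;
.GOTO DONE

.LABEL UNDO
ROLLBACK;
.QUIT 8

.LABEL DONE
.LOGOFF;
.QUIT 0

The .SET SESSION TRANSACTION ANSI must come before .LOGON. The expected count has to come from somewhere; one common pattern is to have a wrapper script run the pre-count SELECT first and then generate or parameterise this BTEQ script with that number.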

Hi, I installed the new version of Windows (Windows 7). The version of Teradata I tried to install was 12.1. I am not able to install any of the Teradata tools such as SQL Assistant, Administrator, etc.; all the installations fail on Windows 7. Is there any way to sort this out? Can anyone help me with this? Thanks in advance. Regards, Jesu

We have our production Teradata box connected to the production mainframe via a TDP channel. Our test Teradata box is connected to the test mainframe via a different TDP channel. If I initiate a TPT job on the production mainframe, will the TPT script be able to copy data from the production Teradata box to the test Teradata box?

Let's say I have 10 tables in database ABC. I need to move the data in these 10 tables from our production box to our test box. The tables in production are on a separate Teradata machine from the ones on test (teraprod vs. teratest), but the tables themselves are exactly the same when it comes to fields and data types. I know I can create separate TPT scripts to move the data. My question is whether I can put all the script logic in a single job. I used the wizard to create a successful script, then tried adding another table, and the script would no longer run (compilation error). Or am I forced to put each table in its own TPT script and execute each script? We are on v12.1, and I did not find anything in the TPT user documentation.
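TPT does allow several steps in one job: a single DEFINE JOB can contain multiple STEP blocks, each with its own APPLY. A rough sketch for two of the tables (the login values, schemas, and operator attributes are placeholders and would need to match your environment; wizard-generated scripts are usually much more verbose):

DEFINE JOB copy_abc_tables
DESCRIPTION 'Copy ABC tables from teraprod to teratest'
(
  DEFINE SCHEMA table1_schema ( col1 INTEGER, col2 VARCHAR(20) );
  DEFINE SCHEMA table2_schema ( colA INTEGER, colB DATE );

  DEFINE OPERATOR export_t1 TYPE EXPORT SCHEMA table1_schema
  ATTRIBUTES (VARCHAR TdpId='teraprod', VARCHAR UserName='user', VARCHAR UserPassword='pwd',
              VARCHAR SelectStmt='SELECT col1, col2 FROM ABC.table1;');
  DEFINE OPERATOR load_t1 TYPE LOAD SCHEMA *
  ATTRIBUTES (VARCHAR TdpId='teratest', VARCHAR UserName='user', VARCHAR UserPassword='pwd',
              VARCHAR TargetTable='ABC.table1', VARCHAR LogTable='ABC.table1_log',
              VARCHAR ErrorTable1='ABC.table1_e1', VARCHAR ErrorTable2='ABC.table1_e2');

  DEFINE OPERATOR export_t2 TYPE EXPORT SCHEMA table2_schema
  ATTRIBUTES (VARCHAR TdpId='teraprod', VARCHAR UserName='user', VARCHAR UserPassword='pwd',
              VARCHAR SelectStmt='SELECT colA, colB FROM ABC.table2;');
  DEFINE OPERATOR load_t2 TYPE LOAD SCHEMA *
  ATTRIBUTES (VARCHAR TdpId='teratest', VARCHAR UserName='user', VARCHAR UserPassword='pwd',
              VARCHAR TargetTable='ABC.table2', VARCHAR LogTable='ABC.table2_log',
              VARCHAR ErrorTable1='ABC.table2_e1', VARCHAR ErrorTable2='ABC.table2_e2');

  STEP copy_table1
  (
    APPLY ('INSERT INTO ABC.table1 (col1, col2) VALUES (:col1, :col2);')
    TO OPERATOR (load_t1)
    SELECT * FROM OPERATOR (export_t1);
  );

  STEP copy_table2
  (
    APPLY ('INSERT INTO ABC.table2 (colA, colB) VALUES (:colA, :colB);')
    TO OPERATOR (load_t2)
    SELECT * FROM OPERATOR (export_t2);
  );
);

The steps run one after another within the single tbuild invocation, so one job can work through all 10 tables; a compilation error when adding a second table is usually a missing STEP wrapper or a duplicated object name rather than a hard limit.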

Hi guys, I do not know the syntax for passing parameters to MLOAD. I am trying to pass two strings as parameters to an MLOAD script file and use those strings inside the script, something like: mload [TableName, SOURCEFILENAME] < path of the script file.

Here is my scenario. This is the part of the MLOAD script where I want to use the passed string (in the INSERT statement below). In TestTable (a Teradata table) there is a column called Stream which contains the name of the source file from which the data was loaded into TestTable. In this case the source file name is 'Caliente', which is entered manually in the insert statement below. There are many source files like this from which I need to extract data, putting the corresponding source file name in the Stream column. So, as a first step, I would like to pass the source file name as a parameter to MLOAD when calling the file that contains the script.

INSERT INTO "TestTable" (
  Dance_ID, Delete_Flag, Cat_ID, Non_Music, Mark_Status, Run_Time, Daypart,
  Add_Date, Add_Date_DATE, Add_Date_TIME, Del_Date, Del_Date_DATE, Del_Date_TIME,
  Cat_Plays, Lib_Plays, Old_Skips, Old_Fails, Perf_Count, Rot_Weight, Max_Daily,
  Old_Start, Old_Kill, Kill_Plays, Simulcast, Tag_Along, Packet_ID, Audio_Ptr,
  Master_Ptr, Edit_Date, Edit_Date_DATE, Edit_Date_TIME, Move_Date, Move_Date_DATE,
  Move_Date_TIME, Move_Cat, Start_Hour, Start_Hour_DATE, Start_Hour_TIME,
  Kill_Hour, Kill_Hour_DATE, Kill_Hour_TIME, Stream
) VALUES (
  :Dance_ID, :Delete_Flag, :Cat_ID, :Non_Music, :Mark_Status, :Run_Time, :Daypart,
  :Add_Date, :Add_Date_DATE, :Add_Date_TIME, :Del_Date, :Del_Date_DATE, :Del_Date_TIME,
  :Cat_Plays, :Lib_Plays, :Old_Skips, :Old_Fails, :Perf_Count, :Rot_Weight, :Max_Daily,
  :Old_Start, :Old_Kill, :Kill_Plays, :Simulcast, :Tag_Along, :Packet_ID, :Audio_Ptr,
  :Master_Ptr, :Edit_Date, :Edit_Date_DATE, :Edit_Date_TIME, :Move_Date, :Move_Date_DATE,
  :Move_Date_TIME, :Move_Cat, :Start_Hour, :Start_Hour_DATE, :Start_Hour_TIME,
  :Kill_Hour, :Kill_Hour_DATE, :Kill_Hour_TIME, 'Caliente'
);

The other string parameter should be used to specify the source file name in the .IMPORT INFILE statement. In this case (see the code below), instead of C:\TD_ETL\ACEESS_FILES_TEST\TestTable_Caliente.amj I should have C:\TD_ETL\ACEESS_FILES_TEST\SOURCEFILENAME.amj, where SOURCEFILENAME is the other parameter passed to MLOAD.

.IMPORT INFILE "C:\TD_ETL\ACEESS_FILES_TEST\TestTable_Caliente.amj"
  AXSMOD Oledb_Axsmod 'noprompt'
  LAYOUT Layout1
  APPLY LabelA;

Thanks in advance.

When editing any 'Event Combination' belonging to either a SysCon or an OpEnv, it is mandatory to choose one of "Change OpEnv/SysCon" or "Queue Table"; otherwise the event combination cannot be created.

It makes sense that we have to provide the "Change OpEnv/SysCon" field for a combination, because without it the event combination remains dangling.
But the other option, selecting "Queue Table", is also allowed when an event combination is created. No OpEnv/SysCon will be affected by this, so does it make sense as a valid option?