If I told you there was a way you might be able to speed up parsing time for your queries, would you be interested?  

In Teradata Database 14.10.02 there is a new capability that allows you to expedite express requests, and I’d like to explain how that works, describe when it can help you, and make some suggestions about how you can use it to get the best performance you can from parsing when the system is under stress.  But first a little background.

What is an express request?

When data dictionary information is needed by modules in the parsing engine (PE), and it cannot be found in the data dictionary cache on that PE, an express request is issued.  That streamlined request goes directly to the AMPs and attempts to find the data dictionary information.  Because the database code itself issues express requests (rather than an end user), and the database can trust itself to do the right thing, these very short requests are allowed to bypass the resolver, security, parsing and optimization modules and are sent directly to the AMPs.

Most express requests are single-AMP requests.  If there is a lock on the row hash of the dictionary row being accessed, the express request will be resent to that AMP with an access lock applied.   Data read from the data dictionary under an access lock is not cached, because it is the result of a dirty read and the row or rows may be in the process of being changed.
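The retry-and-don't-cache behavior described above can be sketched in a few lines. This is an illustrative model only, not Teradata internals; every name here is invented for the example.

```python
# Hypothetical sketch of the express-request retry described above:
# a read blocked by a row-hash lock is resent with an access lock,
# and the dirty-read result is deliberately not cached.

dictionary_cache = {}

def read_row(amp_rows, row_hash, locked_hashes, access_lock=False):
    """Simulate a single-AMP dictionary read; fails if blocked by a lock."""
    if row_hash in locked_hashes and not access_lock:
        raise BlockingIOError("row-hash lock held")
    return amp_rows[row_hash]

def express_request(amp_rows, row_hash, locked_hashes):
    try:
        row = read_row(amp_rows, row_hash, locked_hashes)
        dictionary_cache[row_hash] = row      # clean read: safe to cache
    except BlockingIOError:
        # Resend with an access lock; the row may be mid-change,
        # so return it but do not cache it.
        row = read_row(amp_rows, row_hash, locked_hashes, access_lock=True)
    return row
```

Note the asymmetry: only a clean read populates the cache, which is why a dictionary row fetched under an access lock must be fetched again on the next cache miss.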

Several different modules in the parsing engine can submit express requests.  The figure below lists some of the dictionary information that express requests access, and which modules issue the requests. Even a simple query may require 30 or more express requests, and they will be issued serially.  Factors such as the number of database objects referenced in the SQL, the number of statistics the optimizer looks for, and the complexity of access rights can influence the number of separate requests for data that will be required.
[Figure: dictionary information accessed by express requests, and the parsing engine modules that issue them]

Assessing parsing time spent on express requests

Starting in Teradata Database 14.0 you can see the wall clock time that was spent in processing express requests. Usually this number is close to zero.  This data is in a new column in DBQLogTbl named ParserExpReq.   Below is a sample of columns from several rows of DBQLogTbl output showing ParserExpReq, intentionally selected to show variation.  The unit reported in ParserExpReq is seconds.

[Sample DBQLogTbl output rows showing ParserExpReq alongside parsing CPU time]

In many cases ParserExpReq will be NULL.   You will see a NULL when no express requests were issued because all the data was found in the data dictionary cache on the parsing engine.  Zero means that the wall clock time for express requests was less than 0.01 seconds.  99.9% of the DBQLogTbl rows from my shared test system showed ParserExpReq values of either zero or NULL.  I would expect the same on your platform.   But as you can see from the data above, taken from that same system, there were occasional times when ParserExpReq reported some number of seconds (close to half a minute in the worst case above), even when the CPU time for parsing was very low. 

ParserExpReq reports wall clock time for the execution of all express requests combined on behalf of a query, and will not correlate directly to ParserCPUTime.  The usual reason for ParserExpReq to be a higher number of seconds is that one or more of the express requests were blocked once they got to the AMP.   This could happen if the AMP has exhausted AMP worker tasks.
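If you want to hunt for affected queries, the kind of filter involved can be sketched as follows. The rows here are plain Python dicts standing in for DBQLogTbl output, and the 1-second threshold is an arbitrary illustration, not a recommended value.

```python
# Scan DBQLogTbl-style rows (plain dicts here) and flag queries whose
# combined express-request wall-clock time (ParserExpReq, in seconds)
# is noticeably large -- the signature of express requests waiting
# on the AMPs, since parsing CPU itself is usually tiny.

def flag_express_delays(rows, min_exp_req=1.0):
    flagged = []
    for r in rows:
        exp = r.get("ParserExpReq")    # None models NULL: all data was cached
        if exp is not None and exp >= min_exp_req:
            flagged.append(r["QueryID"])
    return flagged

sample = [
    {"QueryID": 1, "ParserExpReq": None, "ParserCPUTime": 0.02},
    {"QueryID": 2, "ParserExpReq": 0.0,  "ParserCPUTime": 0.01},
    {"QueryID": 3, "ParserExpReq": 27.4, "ParserCPUTime": 0.03},
]
print(flag_express_delays(sample))     # -> [3]
```

In practice you would run the equivalent filter as a WHERE clause against DBQLogTbl itself, comparing ParserExpReq to ParserCPUTime for the same rows.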

What does expediting a request do?

Expediting a request marks it for special performance advantages.  In SLES11 and current SLES10 releases, all queries within a tactical workload are automatically expedited.  As of 14.10.02 you have the capability of expediting express requests as well.  

Here’s why that might make a difference for you.  When a request is expedited it is able to use special reserve pools of AMP worker tasks (AWTs), intended for tactical queries only.  If there is a shortage of AWTs on your platform, use of these reserve pools can speed up the elapsed time of a request, as the request no longer has to wait for another request to complete and free up an AWT so that it can begin to execute.

See this blog posting on reserving AMP worker tasks for more information on how expedited requests take advantage of reserve pools:


In addition to being given access to special reserve pools of AWTs, expedited requests are given other small internal boosts that are coded into the database.  While probably not noticeable in most cases, these slight performance advantages can contribute to completing work more quickly, especially on a platform with a high degree of contention.

The standard way that express requests are assigned to AMP worker tasks

Without this enhancement, express requests are sent to the AMP in a message classified as a Work01 work type message.  Message work types indicate the importance of the work contained in the message.  Work01 is used for spawned work on behalf of a user-initiated query (such as the receiver task during row redistribution).  It is a step up from new work (which runs in Work00).  

If there is a delay in getting an AWT, Work01 messages queue up in the message queue ahead of Work00 messages (new work sent from the dispatcher), but behind all the other work types.   If there are no AWTs available in the unassigned AWT pool at the time the message arrives, and the three reserves for Work01 are in use, the message will wait on the queue.  This can increase the time for an express request to complete.  
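The queueing rules just described can be modeled as a toy simulation. Pool sizes, class names, and return strings below are illustrative only; real AWT management is considerably more involved.

```python
# Toy model of AMP message arrival: a message runs immediately if an
# unassigned AWT is free, falls back to its own work type's reserve
# pool, and otherwise queues behind higher work types but ahead of
# lower ones (so Work01 waits ahead of Work00, behind everything else).

import heapq

class AmpQueue:
    def __init__(self, unassigned_awts, reserves):
        self.unassigned = unassigned_awts
        self.reserves = dict(reserves)   # e.g. {"Work01": 3}
        self.queue = []                  # heap: higher work type pops first
        self.seq = 0                     # FIFO tie-breaker within a work type

    def arrive(self, work_type):
        if self.unassigned > 0:
            self.unassigned -= 1
            return "running"
        if self.reserves.get(work_type, 0) > 0:
            self.reserves[work_type] -= 1
            return "running (reserve)"
        # Negate the work-type number so the heap pops higher types first.
        heapq.heappush(self.queue, (-int(work_type[4:]), self.seq, work_type))
        self.seq += 1
        return "queued"
```

With the unassigned pool empty and the Work01 reserves in use, a newly arriving Work01 express request queues ahead of any waiting Work00 message but behind any waiting higher work types, which is exactly the delay ParserExpReq captures.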


How express requests are assigned to AMP worker tasks with this enhancement

If you take advantage of this enhancement, then messages representing express requests that arrive on the AMPs may be able to use the Work09 work type and reserve pool.  Work09 is a more elevated work type and is the work type assigned to spawned work from an expedited request.   Use of Work09 for express requests only happens, however, if there are AMP worker tasks reserved for the Work09 reserve pool.  If there are no reserves in Work09, then Work01 will continue to be used.

For more information on work types and reserve pools read this posting:


The important point in all of this is that if you are often, or even occasionally, out of AWTs, then making sure your express requests are not going to be impacted when that condition arises could provide better query performance for parsing activities during stressful times.  

Steps you have to take

The default behavior for express requests will remain the same when you upgrade to 14.10.02 or 15.0.   In order to expedite express requests you will need to put in a request to the support center or your account team asking them to change an internal DBS Control parameter called EnableExpediteExp.

The EnableExpediteExp parameter has three possible settings:

0 = Use current behavior, all express requests go to Work01 (the default)

1 = Parser express requests will be expedited and use Work09 for all requests if AWTs have been reserved

2 = Parser express requests will be expedited and use Work09 only if the current workload is expedited by workload management and AWTs have been reserved

If you set EnableExpediteExp = 1, then all express requests for all queries will be expedited, even when the request undergoing parsing is running in a workload that is not itself expedited.   If you set EnableExpediteExp = 2, then only express requests issued on behalf of an expedited workload will be expedited. 
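The three settings reduce to a simple decision, sketched below. This is my reading of the behavior described above, not Teradata source code.

```python
# Work type an express request would use under each EnableExpediteExp
# setting, per the descriptions above. Work09 is only used when AWTs
# have actually been reserved for it; otherwise Work01 remains in effect.

def express_work_type(setting, work09_reserved_awts, workload_expedited):
    if setting == 1 and work09_reserved_awts > 0:
        return "Work09"
    if setting == 2 and workload_expedited and work09_reserved_awts > 0:
        return "Work09"
    return "Work01"   # setting 0, or no Work09 reserves available
```

The key point the sketch makes visible: with no Work09 reserve pool defined, settings 1 and 2 behave exactly like the default.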

Setting EnableExpediteExp to 1 or 2 is going to provide a benefit primarily in situations where there is some level of AWT exhaustion and where an AMP worker task reserve pool for tactical work has been set up. You can look at ParserExpReq in DBQL to check whether you are experiencing longer parsing times due to express request delays.   If you are, have a conversation with the support center about whether this is the right change for you.


rasikatyagi 1 comment Joined 06/14
16 Jul 2014

Hi All,
We are using Teradata Studio Express Version:
When we execute a single statement, it pops up the message "Result set contains at or over 2,000 rows. Cancel at 2,000 rows in accordance with the settings?" Is there some setting by which it can tell us 2000 should be cancelled out of how many rows, i.e. the total rows being selected, in the message box itself?

carrie 474 comments Joined 04/08
16 Jul 2014

I have never used Teradata Studio Express.  I think you may have posted this question on the wrong blog.
Sorry I cannot help you.    You could try posting your question on Teradata Forum.
Thanks, -Carrie

Vinodraj 1 comment Joined 01/13
16 Jul 2014

Hi Rasika,
'Max Display row Count' is set to 2000 by default in preference.
You can increase your limit accordingly. Check out 'Max Display row Count' variable under following location
Preference -> Teradata Datatools Preference -> Result set Viewer Preference


SarathyG 1 comment Joined 09/06
15 Aug 2014

Hi Carrie,
In a scenario where we have set EnableExpediteExp = 1 and AWTs have been reserved, and we have many workloads defined under Tactical, my understanding is that tactical queries are likely to be highly impacted due to the high request rate for work09, as tasks from all types of workloads will compete for work09, resulting in flow control? 
For EnableExpediteExp = 2, I am able to understand the clear benefit, as it is an extra performance booster for tactical queries (alone) in terms of parsing.
What would be the ideal scenario for using EnableExpediteExp = 1? 

- Sarathy G

carrie 474 comments Joined 04/08
18 Aug 2014

Although it is possible,  I would not expect tactical queries to be impacted much, if at all, from competition with express requests.    Express requests for the most part are extremely fast (that is why they are named express requests) unless they have to wait for an AMP worker task.  Most of them go to one AMP, not all-AMPs.     In addition, if your tactical applications are well-tuned, primarily single-AMP requests, then they will mostly use work08 anyway, not work09, and there will be no conflict at all.
The only way to know if there is an impact like you describe is to monitor the SAWT table for inuse counts and max inuse counts for work09, before and then after you change the EnableExpediteExp setting.  If max in use counts for work09 are higher than your reserve number in either case, increase your reserve number.  
Flow control on work09 is pretty unusual.  Even if a work09 message cannot get an AWT, that work09 message will be placed in the message queue ahead of work08, work00, work01 and other lower work types, and will get off the queue sooner than those others, so flow control is very unlikely. Most commonly it is only work01 messages that experience flow control.
The benefit for expedite express requests for all requests is that it adds a small boost in parsing times for all queries, which is often desirable.  However, option 2, which only provides that benefit to already-expedited requests, is a fine choice as well.  It's up to you.   
Thanks, -Carrie

danR 3 comments Joined 01/13
02 Sep 2014

Hi Carrie:
Our company is starting the migration from 13.10 to 14.10.  With the new 14.0 features, including 'columnar' partitioning, ..., what should we look out for as possible 'hiccups' in the move?
thank you


carrie 474 comments Joined 04/08
04 Sep 2014

That's a question that is better addressed to your Teradata account team.  Or you might want to ask that question on one of the Teradata Q&A forums and get feedback from others who have been through the process.  I don't tend to get involved in the onsite migrations in any detailed way.
One thing to realize is that statistics collection functionality has been enhanced in several ways in 14.10.   You can get up to speed on that by reading two orange books by RK Korlapati on Statistics Enhancements in 14.0 and 14.10, and by checking out these postings on my blog at Dev X:
New opportunities for statistics collection in Teradata 14.0
Statistics Threshold Functionality 101
Easing Into Using the New AutoStats Feature
Thanks, -Carrie

danR 3 comments Joined 01/13
06 Jan 2015

Hi Carrie:  Reading up on Columnar, what is the difference between column partitioning and row-based multi-value compression?   I am assuming that both use a compression and decompression algorithm. 
thank you


carrie 474 comments Joined 04/08
07 Jan 2015

Hi Dan,
Those two are not really very comparable, other than within the columnar functionality you may experience something called autocompression if you define a partition with a type of column.  But that's very different from the multivalue compression that has been around in Teradata for a very long time.
Autocompression happens automatically for column partitions and involves potentially several different compression techniques that are applied to the data contained within a container.  One or more of these techniques may be applied to the same container, and include (for example)  NULL compression, run-length compression and trim compression.  The characteristics of the data are key to how effective autocompression will be in a column partitioned table.   See Chapter 4 in the Columnar orange book for a thorough discussion of autocompression.  There are also descriptions of autocompression and how it works in the Teradata manuals if you don't have the orange book handy.
Multivalue compression only compresses specific values within specific columns of a table and must be pre-defined by the administrator.  There is nothing automatic about it.  You have to choose the columns and the values for each column you want compressed, and it only offers one type of compression.   The actual values are moved to the table header and presence bits are used in each row to point to the actual values in the table header that have been removed from the row.  A full description of multivalue compression can be found in the Teradata Database Design manual and the Data Administration manual if you require more detail.
The only overlap between the two is that autocompression has one type of compression that is called local value-list compression, in which often-occurring column values are placed in a dictionary local to the container.  This could be seen to be similar to MVC, where the DBA picks the columns and compresses out some number of the values for that column.    Neither approach requires decompression as the values that have been compressed out are either locally handy or in the table header in memory.
Thanks, -Carrie

SmarakDas 25 comments Joined 02/12
12 Feb 2015

Hello Carrie,
Wonderful Article. I have a few questions concerning Expediting Express Requests:
(a) You have mentioned the DBQLogTbl column "ParserExpReq" denoting the time in seconds for express requests. Does the Explain plan show any indication of this timing?
(b) In the 2nd paragraph under "Assessing parsing time spent on express requests", you have written "You will see a NULL when no express requests were issued because all the data was found in the data dictionary". It should be 'data dictionary cache of the PE' instead of simply 'data dictionary'. Please correct me if I am wrong. As express requests deal with the DD and DD cache, this line is confusing for me.
(c) When you say "If there are no AWTs available in the unassigned AWT pool at the time the message arrives, and the three reserves for Work01 are in-use, the message will wait on the queue", I wish to clarify one thing. If the express request isn't expedited, then the message will wait on the queue after checking the unassigned AWT pool only. The availability of the three AWTs reserved for Work01 shouldn't matter, as this reserve pool is meant for Work01 messages originating from queries of tactical workloads or expedited requests only. Please correct me if I am wrong.
Thanks In Advance,


carrie 474 comments Joined 04/08
12 Feb 2015

Thank you for catching the typo where I left off "cache" after "data dictionary" above.  I have made that correction.
Express requests first look in the data dictionary cache to see if the data they need resides there.  Only if the data is not in the dictionary cache will the express have to go the AMPs.
The explain does not carry any information about express request execution time.  The purpose of explain text is to show the query plan and the estimates that were used to produce that plan, not report on performance data.  That is why express request performance information is in DBQLogTbl.
As for question #3, if express requests are not expedited they will use the work01 work type.  If there are no available AWTs in the unassigned pool, they will look to see if there are any reserves in the reserve pool of 3 for the work01 work type.  The work01 reserve pool is used for express requests and also for unexpedited work that has been spawned from a work00 work type.   Queries that are tactical or that have been expedited will use the work08 work type, and work08 will only have a reserve pool if the DBA has defined one.   Expedited requests will not use the work01 reserve pool.
Thanks, -Carrie

Rahul 3 comments Joined 05/09
6 days ago

Thank you for the extremely informative blog.
What in your opinion should be considered a threshold value for ParserExpReq which will warrant the need for expediting express requests ?
Or, should we try to determine if Work01 AWT exhaustion may be causing a high value for ParserExpReq and then make a decision as to whether express requests need to be expedited ?
- Rahul.

carrie 474 comments Joined 04/08
4 days ago

I don't have a threshold value in mind when it comes to ParserExpReq.  You could compare it to ParserCPUTime.   Some requests are more complex and may have high ParserCPUTime as a result.   Those requests may also have higher ParserExpReq because of that.   Each request is different. 
The two conditions that I would suggest you pay attention to before considering expediting express requests are the following:
- Is anyone complaining about long parse times, or have ParserExpReq times gotten decidedly longer?
- Are you exhausting AMP worker tasks at the time when long parsing times are being detected?
If you are not exhausting AWTs, then there is little value in expediting express requests.
Thanks, -Carrie
