Welcome! Teradata Developer Exchange is a community-oriented website that connects Teradata Associates with Customers and others interested in related technologies. Think of it as a collaborative technical community for sharing ideas, asking questions, learning, and solving problems related to Teradata solutions and beyond. Sounds great, doesn't it? So sign up and join the community now!

The Latest
New Teradata Unity Data Mover 15.00 Available

Announcing Teradata Unity Data Mover 15.00

We are pleased to announce the General Customer Availability (GCA) of Unity Data Mover 15.00. With release 15.00, Unity Data Mover is certified with Teradata Database 15.00 and supports important database features such as JSON and foreign server definitions. Data Mover can now move foreign server definitions, as well as use QueryGrid/foreign server connections to move data from Hadoop to Teradata (if installed). Hadoop support has also been extended to TDH 2.1 and 1.3.2.
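For a feel of what a QueryGrid/foreign server connection enables once it is set up, here is a minimal sketch that queries a Hadoop table through a foreign server from Java over JDBC. The server name hdp_server, the table sales_hdfs, and the host and credentials are hypothetical placeholders, not objects that ship with Data Mover.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ForeignServerQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical host and credentials; replace with your own system.
        try (Connection con = DriverManager.getConnection(
                "jdbc:teradata://tdsystem", "dbc", "dbc");
             Statement stmt = con.createStatement();
             // table@foreign_server syntax reads rows from Hadoop through
             // an existing foreign server definition (names are placeholders).
             ResultSet rs = stmt.executeQuery(
                 "SELECT sale_id, amount FROM sales_hdfs@hdp_server")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1) + "\t" + rs.getDouble(2));
            }
        }
    }
}
```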

Implementing a multiple input stream Teradata 15.0 Table Operator for K-means clustering

Background

This article is a follow-on to article [1], which discussed implementing K-means using a Teradata release 14.10 table operator. The main contributions of this article are a discussion of how to use the new Teradata 15.0 multiple input stream feature and a short discussion of a gcc compiler performance optimization.
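For context, a table operator with multiple input streams is invoked by supplying more than one ON clause. The sketch below is a hypothetical invocation driven from Java: the operator name kmeans_op, the tables points and centroids, and the output columns are assumptions for illustration; the real operator and its input/output contract come from the article's C implementation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class KMeansInvoke {
    public static void main(String[] args) throws Exception {
        // Two ON clauses feed two input streams: the data points are
        // hash-distributed by PARTITION BY, while the centroids are a
        // DIMENSION input copied to every AMP. All names are placeholders.
        String sql =
            "SELECT * FROM kmeans_op(" +
            " ON (SELECT point_id, x, y FROM points) PARTITION BY point_id" +
            " ON (SELECT cluster_id, cx, cy FROM centroids) DIMENSION" +
            ") AS assigned";
        try (Connection con = DriverManager.getConnection(
                "jdbc:teradata://tdsystem", "dbc", "dbc");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                // Output columns depend on the operator's contract function.
                System.out.println(rs.getString(1) + " -> " + rs.getString(2));
            }
        }
    }
}
```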

Now You See It, Now You Can't - How to Use Encryption in Teradata Systems

Many industry regulations, standards, and policies mandate the use of strong encryption to meet various security requirements.
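One common building block is session-level encryption between a client and the database. The minimal sketch below turns it on with the Teradata JDBC Driver's ENCRYPTDATA connection parameter; the host name and credentials are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class EncryptedSession {
    public static void main(String[] args) throws Exception {
        // ENCRYPTDATA=ON asks the driver to encrypt all message traffic
        // for the session, not just the logon exchange.
        String url = "jdbc:teradata://tdsystem/ENCRYPTDATA=ON,TMODE=ANSI";
        try (Connection con = DriverManager.getConnection(url, "dbc", "dbc")) {
            System.out.println("Connected with session encryption enabled.");
        }
    }
}
```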

The Teradata JDBC Driver's COP Discovery, LCC, Logon, and more

The Teradata JDBC Driver Engineering team receives a lot of questions about what happens when a JDBC connection is created. Let's clarify the concepts of Laddered Concurrent Connect (LCC), COP Discovery, and logon, and review what the Teradata JDBC Driver does to create a JDBC connection.


First, some definitions...

"COP" stands for Communications Processor, and is a term originating from Teradata's earliest database appliances in the 1980s. In modern usage, a "COP" is a Teradata Database node that is running a Teradata Database Gateway process.

Big Blocks: Usability Tips & Tradeoffs

What are big blocks? When does it make sense to use them? How do I get started?
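For orientation, data block size is set per table with the DATABLOCKSIZE option. The sketch below creates a hypothetical table with a roughly 1 MB block size from Java; the table, host, and the exact byte value are assumptions, so check the documented maximum for your release before reusing them.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BigBlockTable {
    public static void main(String[] args) throws Exception {
        // Hypothetical table; DATABLOCKSIZE sets the maximum multirow data
        // block size for the table (the value shown is an assumption).
        String ddl =
            "CREATE TABLE sales_history, DATABLOCKSIZE = 1048064 BYTES (" +
            " sale_id BIGINT, sale_date DATE, amount DECIMAL(18,2))" +
            " PRIMARY INDEX (sale_id)";
        try (Connection con = DriverManager.getConnection(
                "jdbc:teradata://tdsystem", "dbc", "dbc");
             Statement stmt = con.createStatement()) {
            stmt.execute(ddl);
        }
    }
}
```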

New, Simplified Approach to Parsing Workload Selection

As of Teradata Database 14.10.05 and 15.00.03, a simplified approach to determining which workload will support session management and parsing has been adopted. This posting describes this more straightforward approach, which is used starting in these 14.10 and 15.0 releases and in all future releases.

An earlier posting, “A Closer Look at How to Setup a Parsing-Only Workload” (December 2014), explains how session and parsing workload assignment took place prior to this simplified approach. If you are on an earlier release, please see that posting.

Example Java UDF for Table Hash Calculations

There is demand for functionality, similar to the industry-standard SHA-256 hash function, that condenses RDBMS table content into a single value which changes completely if even a single bit changes in a multi-million or billion row table.
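The Java building block behind such a UDF is java.security.MessageDigest. The sketch below is not the article's UDF, just a minimal illustration of condensing a sequence of row values into one SHA-256 digest; column handling, ordering, and NULL semantics are deliberately glossed over.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class RowDigest {
    public static void main(String[] args) throws Exception {
        // Feed each "row" into one running SHA-256 digest; flipping a single
        // bit in any row changes the final value completely.
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        String[] rows = { "1|Smith|100.00", "2|Jones|250.50", "3|Brown|75.25" };
        for (String row : rows) {
            sha256.update(row.getBytes(StandardCharsets.UTF_8));
        }
        byte[] digest = sha256.digest();
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        System.out.println(hex);  // one 64-character value for the whole input
    }
}
```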

Graph Processing Inside an Analytic DBMS

Guest Author: Dr. Daniel Abadi, Yale University

Although the Bulk Synchronous Parallel (BSP) model for scalable parallel processing was invented by Leslie Valiant in the 1980s (and was cited as part of the reason for Valiant’s recent Turing Award), it became a popular model for scalable processing of graph data in 2010, when Grzegorz Malewicz et al. from Google published their seminal paper on Pregel in SIGMOD 2010 (http://dl.acm.org/citation.cfm?id=1807184).

Identifying Used, Unused and Missing Statistics

New DBQL logging options USECOUNT and STATSUSAGE, introduced in Teradata Database 14.10, enable the logging of used and missing statistics.  The output of this logging can be utilized to find used, unused, and missing statistics globally (for all queries) or for just a subset of queries.
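As a minimal sketch of turning this logging on, the statements below enable USECOUNT for a database and STATSUSAGE for all queries, issued from Java over JDBC. The database name sales_db, host, and credentials are placeholders; exact scoping rules are in the DBQL documentation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class EnableStatsLogging {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:teradata://tdsystem", "dbc", "dbc");
             Statement stmt = con.createStatement()) {
            // Object use counts (which statistics are actually used),
            // logged per database; sales_db is a placeholder name.
            stmt.execute("BEGIN QUERY LOGGING WITH USECOUNT ON sales_db");
            // Used and missing statistics, logged for all queries.
            stmt.execute("BEGIN QUERY LOGGING WITH STATSUSAGE ON ALL");
        }
    }
}
```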

Just a Little Bit More

The theme of this blog is an examination of forces that would disrupt existing data warehouse implementations.  I categorize these as either long tail or black swan events.
