Join (SQL)
- An SQL join clause combines columns from one or more tables in a relational database. It creates a set that can be saved as a table or used as it is.
Implementation
- Much work in database systems has aimed at efficient implementation of joins, because relational systems commonly call for joins, yet optimizing their execution is difficult.
- The problem arises because inner joins operate both commutatively and associatively. In practice, this means that the user merely supplies the list of tables for joining and the join conditions to use, and the database system has the task of determining the most efficient way to perform the operation.
- A query optimizer determines how to execute a query containing joins. A query optimizer has two basic freedoms:
- Join order: Because the join operation is commutative and associative, the order in which the system joins tables does not change the final result set of the query. However, join order can have an enormous impact on the cost of the join operation, so choosing the best join order becomes very important.
- Join method: Given two tables and a join condition, multiple algorithms can produce the result set of the join. Which algorithm runs most efficiently depends on the sizes of the input tables, the number of rows from each table that match the join condition, and the operations required by the rest of the query.
Join Algorithms
- Three fundamental algorithms for performing a join operation exist: nested loop join, sort-merge join and hash join.
Nested loop join
- A nested loop join is a naive algorithm that joins two sets by using two nested loops.
- Two relations R and S are joined as follows:
For each tuple r in R do
    For each tuple s in S do
        If r and s satisfy the join condition
            Then output the tuple <r,s>
- This algorithm will involve nr * bs + br block transfers and nr + br seeks, where br and bs are the number of blocks in relations R and S respectively, and nr is the number of tuples in relation R.
- The algorithm runs in O(|R| * |S|) I/Os, where |R| and |S| are the number of tuples contained in R and S respectively, and can easily be generalized to join any number of relations.
- The block nested loop join algorithm is a generalization of the simple nested loops algorithm that takes advantage of additional memory to reduce the number of times that the S relation is scanned.
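A sketch of the block nested loop variant, assuming in-memory lists stand in for disk-resident relations and `block_size` models the available buffer; names are illustrative:

```python
# Block nested loop join: scan S once per block of R rather than once per tuple.
def block_nested_loop_join(R, S, condition, block_size):
    output = []
    for i in range(0, len(R), block_size):
        block = R[i:i + block_size]   # load one block of R "into memory"
        for s in S:                   # S is scanned once per block of R
            for r in block:
                if condition(r, s):
                    output.append(r + s)
    return output
```

With a block size of b tuples, S is scanned roughly |R| / b times instead of |R| times, which is the saving the text describes.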
Sort-merge join
- The sort-merge join (also known as merge join) is a join algorithm used in the implementation of a relational database management system.
- The basic problem of a join algorithm is to find, for each distinct value of the join attribute, the set of tuples in each relation which display that value.
- The key idea of the sort-merge algorithm is to first sort the relations by the join attribute, so that interleaved linear scans will encounter these sets at the same time.
- In practice, the most expensive part of performing a sort-merge join is arranging for both inputs to the algorithm to be presented in sorted order.
- This can be achieved via an explicit sort operation (often an external sort), or by taking advantage of a pre-existing ordering in one or both of the join relations.
- The latter condition can occur because an input to the join might be produced by an index scan of a tree-based index, another merge join, or some other plan operator that happens to produce output sorted on an appropriate key.
- Time complexity: Suppose we have two relations R and S, where R fits in Pr pages of memory and S fits in Ps pages of memory. In the worst case, the sort-merge join runs in O(Pr + Ps) I/Os. In the case that R and S are not already ordered, the worst-case cost contains additional sorting terms: O(Pr + Ps + Pr log(Pr) + Ps log(Ps)), which equals O(Pr log(Pr) + Ps log(Ps)).
Pseudocode
- For simplicity, the algorithm is described in the case of an inner join of two relations on a single attribute. Generalization to other join types, more relations and more keys is straightforward.
function sortMerge(relation left, relation right, attribute a)
    var relation output
    var list left_sorted := sort(left, a)    // Relation left sorted on attribute a
    var list right_sorted := sort(right, a)
    var attribute left_key, right_key
    var set left_subset, right_subset        // These sets discarded except where join predicate is satisfied
    advance(left_subset, left_sorted, left_key, a)
    advance(right_subset, right_sorted, right_key, a)
    while not empty(left_subset) and not empty(right_subset)
        if left_key = right_key              // Join predicate satisfied
            add cartesian product of left_subset and right_subset to output
            advance(left_subset, left_sorted, left_key, a)
            advance(right_subset, right_sorted, right_key, a)
        else if left_key < right_key
            advance(left_subset, left_sorted, left_key, a)
        else                                 // left_key > right_key
            advance(right_subset, right_sorted, right_key, a)
    return output

// Remove tuples from sorted to subset until the sorted[1].a value changes
function advance(subset out, sorted inout, key out, a in)
    key := sorted[1].a
    subset := emptySet
    while not empty(sorted) and sorted[1].a = key
        insert sorted[1] into subset
        remove sorted[1]
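The pseudocode can be expressed as a runnable Python sketch; the dict-based row representation and index-based scans are illustrative assumptions:

```python
def sort_merge_join(left, right, key):
    """Inner equi-join of two lists of dicts on attribute `key`."""
    left_sorted = sorted(left, key=lambda t: t[key])
    right_sorted = sorted(right, key=lambda t: t[key])
    output, i, j = [], 0, 0
    while i < len(left_sorted) and j < len(right_sorted):
        lk, rk = left_sorted[i][key], right_sorted[j][key]
        if lk == rk:                           # join predicate satisfied
            i2 = i                             # find the run of equal keys on the left
            while i2 < len(left_sorted) and left_sorted[i2][key] == lk:
                i2 += 1
            j2 = j                             # ... and the run on the right
            while j2 < len(right_sorted) and right_sorted[j2][key] == rk:
                j2 += 1
            for l in left_sorted[i:i2]:        # cartesian product of the two runs
                for r in right_sorted[j:j2]:
                    output.append({**l, **r})
            i, j = i2, j2
        elif lk < rk:
            i += 1                             # advance the left scan
        else:
            j += 1                             # advance the right scan
    return output
```

The index ranges `[i:i2]` and `[j:j2]` play the role of `left_subset` and `right_subset` in the pseudocode: each holds all tuples sharing one value of the join attribute.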
Hash Join
- The hash join is similar to the nested loop join but is typically faster; it applies only to equi-joins.
Classic hash join
- The classic hash join algorithm for an inner join of two relations proceeds as follows:
- Build phase: prepare a hash table for the smaller relation. The hash table entries consist of the join attribute and its row. (Because the hash table is accessed by applying a hash function to the join attribute, it will be much quicker to find a given join attribute's rows by using this table than by scanning the original relation.)
- Probe phase: scan the larger relation and find the relevant rows from the smaller relation by looking in the hash table.
- This algorithm is simple, but it requires that the smaller join relation fits into memory, which is sometimes not the case.
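The build and probe phases can be sketched in Python, assuming rows are dicts keyed by attribute name (an illustrative choice, not from the source):

```python
def classic_hash_join(build, probe, key):
    """Inner equi-join; `build` should be the smaller relation."""
    # Build phase: hash table mapping join-attribute values to build rows
    table = {}
    for row in build:
        table.setdefault(row[key], []).append(row)
    # Probe phase: a single scan of the larger relation
    output = []
    for row in probe:
        for match in table.get(row[key], []):
            output.append({**match, **row})
    return output
```

The Python dict serves as the hash table, so each probe lookup is expected constant time rather than a scan of the build relation.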
- A simple approach to handling the case in which the build relation does not fit in memory proceeds as follows:
- For each tuple r in the build input R
- Add r to the in-memory hash table
- If the size of the hash table equals the maximum in-memory size:
- Scan the probe input S, and add matching join tuples to the output relation
- Reset the hash table, and continue scanning the build input R
- Do a final scan of the probe input S and add the resulting join tuples to the output relation
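The steps above can be sketched as follows; `max_entries` models the maximum in-memory hash table size, and the dict-based rows are illustrative:

```python
def hash_join_limited_memory(R, S, key, max_entries):
    """Equi-join of build input R and probe input S with a bounded hash table."""
    output, table = [], {}

    def probe_and_reset():
        for s in S:                            # scan the probe input S
            for r in table.get(s[key], []):
                output.append({**r, **s})      # add matching join tuples
        table.clear()                          # reset the hash table

    n = 0
    for r in R:                                # scan the build input R
        table.setdefault(r[key], []).append(r)
        n += 1
        if n == max_entries:                   # hash table reached its limit
            probe_and_reset()
            n = 0
    if n:
        probe_and_reset()                      # final scan of the probe input S
    return output
```

Each build tuple lives in exactly one "epoch" of the hash table, so no join tuple is emitted twice, but S is rescanned once per epoch, which is the inefficiency the grace hash join removes.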
Grace hash join
- A better approach is known as the "grace hash join", after the GRACE database machine for which it was first implemented.
- This algorithm avoids rescanning the entire S relation by:
- first partitioning both R and S via a hash function, and writing these partitions out to disk.
- then loading pairs of partitions into memory, building a hash table for the smaller partitioned relation, and probing the other relation for matches with the current hash table. Because the partitions were formed by hashing on the join key, any two joining tuples must belong to the same pair of partitions.
- It is possible that one or more of the partitions still does not fit into the available memory, in which case the algorithm is recursively applied: an additional orthogonal hash function is chosen to hash the large partition into sub-partitions, which are then processed as before. Since this is expensive, the algorithm tries to reduce the chance that it will occur by making the initial partitions as small as reasonably possible.
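A simplified sketch of the two phases, with in-memory lists standing in for the partition files written to disk; the partition count and row layout are assumptions:

```python
def grace_hash_join(R, S, key, num_partitions):
    """Partition both inputs by hashing the join key, then hash-join each pair."""
    # Partition phase (disk writes are simulated with in-memory lists)
    r_parts = [[] for _ in range(num_partitions)]
    s_parts = [[] for _ in range(num_partitions)]
    for r in R:
        r_parts[hash(r[key]) % num_partitions].append(r)
    for s in S:
        s_parts[hash(s[key]) % num_partitions].append(s)
    # Join phase: joining tuples can only meet inside the same partition pair
    output = []
    for rp, sp in zip(r_parts, s_parts):
        table = {}
        for r in rp:                           # build on the R partition
            table.setdefault(r[key], []).append(r)
        for s in sp:                           # probe with the S partition
            for r in table.get(s[key], []):
                output.append({**r, **s})
    return output
```

Each partition pair is joined independently, so only one partition's hash table needs to fit in memory at a time, and S is never rescanned as a whole.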
Hybrid hash join
- The hybrid hash join algorithm is a refinement of the grace hash join which takes advantage of more available memory.
- During the partitioning phase, the hybrid hash join uses the available memory for two purposes:
- To hold the current output buffer page for each of the k partitions.
- To hold an entire partition in-memory, known as "partition 0".
- Because partition 0 is never written to or read from disk, the hybrid hash join typically performs fewer I/O operations than the grace hash join.
Hash anti-join
- Hash joins can also be evaluated for an anti-join predicate (a predicate selecting values from one table when no related values are found in the other). Depending on the sizes of the tables, different algorithms can be applied:
Hash left anti-join
- Prepare a hash table for the NOT IN side of the join.
- Scan the other table, selecting any rows where the join attribute hashes to an empty entry in the hash table.
- This is more efficient when the NOT IN table is smaller than the FROM table.
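The two steps can be sketched in Python; building a set of the NOT IN keys plays the role of the hash table, and the row layout is an illustrative assumption. Note this sketch ignores SQL's NULL semantics for NOT IN:

```python
def hash_left_anti_join(from_rows, not_in_rows, key):
    """Rows of from_rows whose join attribute never appears in not_in_rows."""
    present = {row[key] for row in not_in_rows}   # hash table of NOT IN keys
    return [row for row in from_rows              # scan the other table
            if row[key] not in present]           # keep rows with no hash hit
```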
Hash right anti-join
- Prepare a hash table for the FROM side of the join.
- Scan the NOT IN table, removing the corresponding records from the hash table on each hash hit.
- Return everything left in the hash table.
- This is more efficient when the NOT IN table is larger than the FROM table.
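A sketch of the right anti-join variant under the same assumptions (dict rows, NULL semantics ignored):

```python
def hash_right_anti_join(from_rows, not_in_rows, key):
    """Build on the FROM side, delete on each hit, return what remains."""
    table = {}
    for row in from_rows:                      # build phase on the FROM side
        table.setdefault(row[key], []).append(row)
    for row in not_in_rows:                    # scan the NOT IN table
        table.pop(row[key], None)              # remove records on each hash hit
    return [row for rows in table.values() for row in rows]
```

Only the FROM table's rows are materialized in the hash table, so this direction wins when the NOT IN side is the larger input.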
Join Indexes
- Join indexes are database indexes that facilitate the processing of join queries in data warehouses.
- They are currently (2012) available in implementations by Oracle and Teradata.
- In the Teradata implementation, specified columns, aggregate functions on columns, or components of date columns from
- The Oracle implementation limits itself to using bitmap indexes.