# Difference Between Decision Tree And Decision Table


## Decision Table

A decision table is a table that indicates conditions and actions in a simplified and orderly manner. By presenting logical alternative courses of action under various operating conditions, a decision table enables an individual to think through a problem and present its solution in compact notation.

Decision tables are used to model complicated logic. They make it easy to see that all possible combinations of conditions have been considered, and to spot at a glance the cases in which conditions are not met.

A decision table can also be described as a cause-effect table and is an effective way to deal with combinations of inputs and their associated outputs. It is an excellent tool to use in both testing and requirements management. Using decision tables, it becomes easier for a requirements specialist to write requirements that cover all conditions.

In a decision table, the logic is well divided into conditions, actions (decisions) and rules for representing the various components that form the logical model. The general format of a decision table has four basic parts. They include:

• Action entry: It indicates the actions to be taken.
• Condition entry: It indicates conditions which are being met or answers the questions in the condition stub.
• Action stub: It lists statements that describe all actions that can be taken.
• Condition stub: It lists all conditions to be tested for factors necessary for taking a decision.
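The four parts above can be sketched in code. The following is a minimal illustration, assuming a hypothetical order-discount policy; the condition names, rules and actions are invented for the example.

```python
# Condition stub: all conditions to be tested.
CONDITION_STUB = ["order_over_100", "is_member"]

# Action stub: all actions that can be taken.
ACTION_STUB = ["give_discount", "charge_full_price"]

# Each rule pairs condition entries (answers to the condition stub)
# with action entries (the actions to be taken).
RULES = [
    ({"order_over_100": True,  "is_member": True},  ["give_discount"]),
    ({"order_over_100": True,  "is_member": False}, ["give_discount"]),
    ({"order_over_100": False, "is_member": True},  ["give_discount"]),
    ({"order_over_100": False, "is_member": False}, ["charge_full_price"]),
]

def decide(facts):
    """Return the actions of the first rule whose condition entries match."""
    for conditions, actions in RULES:
        if all(facts[c] == v for c, v in conditions.items()):
            return actions
    raise ValueError("no rule matches - the table is incomplete")
```

Because every combination of conditions appears as a rule, a gap in coverage shows up immediately as a missing row rather than a hidden bug.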

### What You Need To Know About Decision Table

• A decision table is a table of rows and columns separated into four quadrants.
• In a decision table, the inputs are listed in a column, with the outputs in the same column but below the inputs.
• Using decision tables makes it possible to detect combinations of conditions that would otherwise not have been found, and therefore not tested or developed.
• Decision tables are best constructed during system design, when they become useful to developers, testers and end-users.
• Decision table testing is a black box test design technique to determine the test scenarios for complex business logic.
• There are two types of decision tables: extended entry tables and limited entry tables. In an extended entry table, the entry and stub sections of any specific condition must be considered together to determine whether the condition applies to a given rule. In a limited entry table, the conditions or actions required are contained entirely within the appropriate stubs.
• To build a decision table, the analyst needs to determine the maximum size of the table, eliminate any impossible situations, inconsistencies or redundancies, and simplify the table as much as possible.
• Decision tables can be, and often are, embedded within computer programs and used to drive the logic of the program. A simple example might be a lookup table containing ranges of possible input values and a function pointer to the section of code that processes each input.
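The embedded lookup-table pattern mentioned above can be sketched as follows. The input ranges and handler functions here are invented for illustration.

```python
# Handlers standing in for the sections of code that process each input range.
def handle_low(x):
    return f"low: {x}"

def handle_mid(x):
    return f"mid: {x}"

def handle_high(x):
    return f"high: {x}"

# Each row: (inclusive lower bound, exclusive upper bound, handler function).
DISPATCH_TABLE = [
    (0, 10, handle_low),
    (10, 100, handle_mid),
    (100, float("inf"), handle_high),
]

def process(x):
    """Drive the program's logic from the table rather than nested ifs."""
    for low, high, handler in DISPATCH_TABLE:
        if low <= x < high:
            return handler(x)
    raise ValueError(f"no handler for input {x}")
```

Keeping the ranges and handlers in a table means new cases are added as data rows, without touching the dispatch logic itself.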

### Advantages Of Decision Table

• When there are many conditions, a decision table helps to visualize the outcomes of a situation.
• They are simple to understand, and anyone can use this method to design test scenarios and test cases.
• They are easy to draw.
• They provide more compact documentation.
• Decision tables can be changed easily according to the situation.
• Decision tables summarize all the outcomes of a situation and suggest suitable actions.
• Decision tables have a standard format.

### Disadvantages Of Decision Table

• Decision tables cannot express the complete sequence of operations needed to solve a problem, so it may be difficult for a programmer to translate a decision table directly into a computer program.
• Decision tables do not show the flow of logic for the solution to a given problem.
• When there are many alternatives, a decision table cannot list them all.
• Decision tables only present a partial solution.

## Decision Tree

A decision tree is a decision support tool that uses a branching method to illustrate every possible outcome of a decision. Decision trees can be drawn by hand or created with a graphics program or specialized software. A decision tree typically begins with a single node, which branches into possible outcomes. Each of those outcomes leads to additional nodes, which branch off into further possibilities, giving the diagram its tree-like shape.

In a decision tree (a tree-like graph), the nodes represent the points where we pick an attribute and ask a question, the edges represent the answers to the question, and the leaves represent the actual output or class label. Decision trees visually demonstrate cause-and-effect relationships, providing a simplified view of a potentially complicated process.
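That node/edge/leaf structure can be sketched directly in code. The following is a minimal illustration; the weather attributes and labels are hypothetical.

```python
class Leaf:
    """A leaf holds the actual output or class label."""
    def __init__(self, label):
        self.label = label

class Node:
    """An internal node asks a question; edges carry the answers."""
    def __init__(self, attribute, branches):
        self.attribute = attribute   # the attribute asked about at this node
        self.branches = branches     # maps each answer to a child node or leaf

def predict(node, example):
    """Read the tree from root to leaf, following the answer at each split."""
    while isinstance(node, Node):
        node = node.branches[example[node.attribute]]
    return node.label

# The root asks about "outlook"; one branch asks a follow-up about "windy".
tree = Node("outlook", {
    "sunny": Leaf("play"),
    "rainy": Node("windy", {True: Leaf("stay_home"), False: Leaf("play")}),
})
```

Reading from the top down, each question narrows the possibilities until a leaf supplies the prediction, exactly as described above.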

While creating a decision tree, some of the basic assumptions include:

• At the beginning, the whole training set is considered as the root.
• Records are distributed recursively on the basis of attribute values.
• The order in which attributes are placed as the root or as internal nodes of the tree is determined using a statistical approach.
• Feature values are preferred to be categorical. If the values are continuous then they are discretized prior to building the model.

### What You Need To Know About Decision Tree

• Decision trees are a non-parametric supervised learning method used for both classification and regression tasks. A decision tree graphically illustrates all the possible alternatives, probabilities and outcomes, and identifies the benefits of using decision analysis.
• The decision tree is typically read from top (root) to the bottom (leaves). A question is asked at each node (split point) and the response to that question determines which branch is followed next. The prediction is given by the label of a leaf.
• Decision trees can be drawn by hand or created with a graphics program or specialized software.
• The goal of a decision tree is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.
• Decision trees are commonly used in operations research, specifically in decision analysis, to help identify the strategy most likely to reach a goal.
• A decision tree is considered optimal when it represents the most data with the fewest number of levels or questions.
• In a decision tree, the sample is split into two or more homogeneous sets based on the most significant splitter/differentiator among the input variables.
• There are two types of decision trees, based on the type of target variable at hand: categorical variable decision trees and continuous variable decision trees.
• In a decision tree, data type is never a constraint; it can handle both numerical and categorical variables.
• When dealing with categorical data with multiple levels, the information gain is biased in favor of the attribute with the most levels.
• Non-linear relationships between parameters do not affect tree performance.
• Decision trees make no assumptions about the space distribution or the classifier structure.
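The "most significant splitter" and the information-gain bias mentioned above can be made concrete. The sketch below computes entropy over class labels and the information gain of splitting on one attribute; the data layout (rows as dicts, labels as a parallel list) is an assumption for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Entropy reduction from splitting the rows on one attribute."""
    total = len(labels)
    # Group the labels by the value this attribute takes in each row.
    split = {}
    for row, label in zip(rows, labels):
        split.setdefault(row[attribute], []).append(label)
    # Weighted entropy remaining after the split.
    remainder = sum(len(subset) / total * entropy(subset)
                    for subset in split.values())
    return entropy(labels) - remainder
```

The attribute with the highest information gain is chosen as the splitter; note that an attribute with many distinct values tends to produce many small, pure subsets, which is the source of the bias toward multi-level attributes.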

### Other Applications Of Decision Trees

• Manufacturing: chemical material evaluation for manufacturing/production.
• Planning: scheduling activities.
• Pharmacology: developing an analysis of drug efficacy.
• Molecular biology: analyzing amino acid sequences in the Human Genome Project.
• Medicine: analysis of sudden infant death syndrome (SIDS).
• Production: process optimization in electrochemical machining.
• Biomedical engineering: identifying features to be used in implantable devices.

### Advantages Of Decision Tree

• Decision trees provide a clear indication of which fields are most important for prediction or classification.
• The tree output is easy to read and interpret.
• They can be used as a baseline benchmark for other predictive techniques.
• Decision trees are able to generate understandable rules.
• Decision trees require relatively little effort from users for data preparation.
• Decision trees perform classification without requiring much computation.
• Decision trees can handle both numerical and categorical variables.
• To a fair degree, decision trees are robust to outliers and missing values.
• A multitude of business problems can be analyzed and solved by decision trees.