Siddharth K Rao
Navneeth S Holla
Vignesh V Anweri
Decision Trees are among the simplest and most popular classification algorithms to understand and interpret. The goal of using a Decision Tree is to build a model that predicts the class or value of the target variable by learning simple decision rules inferred from prior data. The primary challenge in implementing a decision tree is identifying which attribute to select as the decision node at each level; handling this is known as attribute selection. The ID3 algorithm builds decision trees using a top-down greedy search through the space of possible branches, with no backtracking: it always makes the choice that seems best at that moment. Attribute selection in ID3 involves computing the entropy of the data, computing the information gain of each attribute, and selecting the attribute with the highest information gain as the decision node.
In this assignment we are given functions that calculate these quantities, which together support the construction of a decision tree over categorical variables.
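As a rough illustration of the two quantities described above, the following is a minimal sketch of entropy and information gain for categorical data. The function names `entropy` and `information_gain` are chosen here for clarity and are not necessarily the names used in the assignment's given code.

```python
import numpy as np

def entropy(labels):
    # H(S) = -sum_i p_i * log2(p_i), where p_i is the fraction of
    # examples in S belonging to class i.
    _, counts = np.unique(np.asarray(labels), return_counts=True)
    probs = counts / counts.sum()
    return float(-np.sum(probs * np.log2(probs)))

def information_gain(attribute_values, labels):
    # Gain(S, A) = H(S) - sum_v (|S_v| / |S|) * H(S_v),
    # where S_v is the subset of S with attribute A equal to v.
    attribute_values = np.asarray(attribute_values)
    labels = np.asarray(labels)
    weighted_entropy = 0.0
    for v in np.unique(attribute_values):
        subset = labels[attribute_values == v]
        weighted_entropy += (len(subset) / len(labels)) * entropy(subset)
    return entropy(labels) - weighted_entropy
```

ID3 would evaluate `information_gain` for every candidate attribute and split on the one with the highest value; a gain of 1.0 on binary labels means the attribute separates the classes perfectly.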
Search algorithms aim to navigate from a start state to a goal state by transitioning through intermediate states. They operate over a state space, the set of all possible states the agent can be in. Many informed and uninformed search algorithms exist and are widely used: A* search, Uniform Cost Search (UCS), Depth First Search (DFS), and Greedy Search, to name a few.
In this assignment we are given a function called tri_traversal which implements three of these algorithms: A* search, Uniform Cost Search (UCS), and Depth First Search (DFS).
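To make the idea of cost-ordered expansion concrete, here is a minimal sketch of one of the three algorithms, Uniform Cost Search, on an adjacency-list graph. This is an illustrative standalone function, not the assignment's tri_traversal, whose exact interface is not shown here.

```python
import heapq

def uniform_cost_search(graph, start, goal):
    # graph: dict mapping a node to a list of (neighbor, edge_cost) pairs.
    # UCS always expands the cheapest frontier node first, so the first
    # time the goal is popped, the path found is optimal.
    frontier = [(0, start, [start])]   # (path_cost, node, path_so_far)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier, (cost + step_cost, neighbor, path + [neighbor]))
    return float('inf'), []            # goal unreachable
```

A* follows the same skeleton but orders the frontier by path cost plus a heuristic estimate to the goal, while DFS replaces the priority queue with a stack.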
The third task is to implement a neural network from scratch that classifies whether a patient will give birth to a Low Birth Weight child or not. Only NumPy is used to implement the network itself, with scikit-learn used to split the given data into training and testing sets.
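The shape of such a from-scratch implementation can be sketched as below: a single hidden layer with sigmoid activations, trained by gradient descent for binary classification. The function name `train_binary_nn` and all hyperparameters (hidden size, learning rate, epoch count) are illustrative assumptions, not the assignment's actual architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_binary_nn(X, y, hidden=4, lr=1.0, epochs=5000, seed=0):
    # X: (n, d) feature matrix; y: (n, 1) array of 0/1 labels.
    # One hidden layer, sigmoid activations throughout; the weight
    # updates below are plain batch gradient descent.
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)         # hidden activations, (n, hidden)
        out = sigmoid(h @ W2 + b2)       # predicted probabilities, (n, 1)
        d_out = out - y                  # output-layer error signal
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out) / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * (X.T @ d_h) / len(X)
        b1 -= lr * d_h.mean(axis=0)
    # Return a predictor thresholding the output probability at 0.5.
    return lambda Xq: (sigmoid(sigmoid(Xq @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

In practice the data would first be split with scikit-learn's train_test_split, the model fit on the training portion, and accuracy reported on the held-out portion.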