CO Data 3

DECISION TREES for Risk Assessment

One of the great advantages of decision trees is their interpretability. The rules learnt for classification are easy for a person to follow, unlike the opaque “black box” of many other methods, such as neural networks. We demonstrate this using a German credit dataset. You can read a description of this dataset at the UCI site. The task is to predict whether a loan applicant is a good or bad credit risk based on 20 attributes. We’ve simplified the dataset somewhat, in particular making attribute names and values more meaningful.

1. Download the credit_Dataset.arff dataset and load it into Weka.
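
If you would like to double-check the load outside the GUI, the same file can be read with Weka's Java API. The following is only a minimal sketch; the file path and the assumption that the class attribute (good/bad risk) is the last attribute are mine, not part of the assignment.

```java
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

// Minimal sketch: load the ARFF file with Weka's Java API and print a summary.
// The path "credit_Dataset.arff" is an assumption -- adjust it to where you saved the file.
public class LoadCredit {
    public static void main(String[] args) throws Exception {
        DataSource source = new DataSource("credit_Dataset.arff");
        Instances data = source.getDataSet();
        // Assumption: the class attribute (good/bad credit risk) is the last attribute.
        data.setClassIndex(data.numAttributes() - 1);
        System.out.println("Instances: " + data.numInstances()
                + ", attributes: " + data.numAttributes());
    }
}
```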

2. (5 Points) When presented with a dataset, it is usually a good idea to visualise it first. Go to the Visualize tab and click on any of the scatter plots to open a new window showing the scatter plot for two selected attributes. Try visualising a scatter plot of age against duration. Do you notice anything unusual? You can click on any data point to display all of its values.

3. (5 Points) In the previous step you should have found a data point which seems to be corrupted, as some of its values are nonsensical. Even a single point like this can significantly affect the performance of a classifier. How do you think it would affect decision trees? A good way to check is to compare the performance of each classifier before and after removing this data point.
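
A quick way to confirm the anomaly numerically is to print the range of every numeric attribute; a corrupted value such as a negative age shows up immediately in the minimum. Below is a sketch using Weka's Java API, with the same assumed file name as above.

```java
import weka.core.Attribute;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.experiment.Stats;

// Sketch: print min/max for every numeric attribute to spot corrupted values
// (e.g. a negative age). The file name is an assumption.
public class CheckRanges {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("credit_Dataset.arff").getDataSet();
        for (int i = 0; i < data.numAttributes(); i++) {
            Attribute att = data.attribute(i);
            if (att.isNumeric()) {
                Stats s = data.attributeStats(i).numericStats;
                System.out.printf("%-20s min=%.1f max=%.1f%n", att.name(), s.min, s.max);
            }
        }
    }
}
```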

4. (10 Points) To remove this instance from the dataset we will use a filter. We want to remove all instances where the age of an applicant is lower than 0 years, as this suggests that the instance is corrupted. In the Preprocess tab, click Choose in the Filter pane and select filters > unsupervised > instance > RemoveWithValues. Click on the text of this filter to change its parameters. Set the attribute index to 13 (Age) and the split point to 0. Click OK to set the parameters and Apply to apply the filter to the data. Visualise the data again to verify that the invalid data point was removed.

5. (20 Points) On the Classify tab, select the Percentage split test option and change its value to 90%. This way, we will train the classifiers on 90% of the data and evaluate their performance on the remaining 10%. First, train a decision tree classifier with default options: select classifiers > trees > J48 and click Start. J48 is the Weka implementation of the C4.5 algorithm, which uses the normalized information gain criterion to build a decision tree for classification.
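
For reference, here is a sketch of the same 90%/10% percentage-split evaluation done with the Java API. The file name, class attribute position and random seed are assumptions, so the exact figures may differ slightly from the Explorer's output.

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

// Sketch: reproduce a 90%/10% percentage-split evaluation of J48 with default options.
// File name, class attribute position and random seed are assumptions; the GUI uses
// its own seed, so results may differ slightly.
public class TrainJ48 {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("credit_Dataset.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        data.randomize(new Random(1));                       // shuffle before splitting
        int trainSize = (int) Math.round(data.numInstances() * 0.9);
        Instances train = new Instances(data, 0, trainSize);
        Instances test  = new Instances(data, trainSize, data.numInstances() - trainSize);

        J48 tree = new J48();                                // C4.5 with default options
        tree.buildClassifier(train);
        System.out.println(tree);                            // text form of the learnt tree

        Evaluation eval = new Evaluation(train);
        eval.evaluateModel(tree, test);
        System.out.println(eval.toSummaryString());
    }
}
```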

6. (20 Points) After training the classifier, the full decision tree is output for your perusal; you may need to scroll up for this. The tree may also be viewed in graphical form by right-clicking in the 
Result list and selecting 
Visualize tree; unfortunately this format is very cluttered for large trees. Such a tree accentuates one of the strengths of decision tree algorithms: they produce classifiers which are understandable to humans. This can be an important asset in real life applications (people are seldom prepared to do what a computer program tells them if there is no clear explanation). Observe the output of the classifier and try to answer the following questions:

· How would you assess the performance of the classifier? Is the percentage of Correctly Classified Instances a sufficient measure in this case? Why? Hint: check the number of good and bad cases in the test sample using the confusion matrix (a sketch for reading these figures programmatically follows this list of questions). Each column of the matrix represents the instances in a predicted class, while each row represents the instances in an actual class. For example, consider an experiment with P positive instances and N negative instances: the four outcomes can be laid out in a 2-by-2 contingency table, or confusion matrix. One benefit of a confusion matrix is that it makes it easy to see whether the system is confusing two classes (i.e. commonly mislabelling one as another).

· Looking at the decision tree itself, are the rules it applies sensible? Are there any branches which appear absurd? At what depth of the tree do they occur? What does this suggest?

Hint: Check the rules applied after following the paths: (a) CheckingAccount = <0, Foreign = yes, Duration > 11, Job = skilled, OtherDebtors = none, Duration <= 30 and (b) CheckingAccount = <0, Foreign = yes, Duration > 11, Job = unskilled.

· How does the decision tree deal with classification when there are zero instances in the training set corresponding to a particular path in the tree (e.g. those leaf nodes that have (0:0))?
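
For the first question above, the confusion matrix and per-class measures can also be pulled from the Evaluation object. The lines below are a fragment meant to be appended inside the main method of the J48 percentage-split sketch in step 5 (so eval and test already exist); the mapping of class indices to 'good'/'bad' is read from the data rather than assumed.

```java
// Fragment: append to the J48 sketch above, after eval.evaluateModel(...).
// Check test.classAttribute() to see which index corresponds to "good" and "bad".
System.out.println(eval.toMatrixString("Confusion matrix"));
for (int c = 0; c < test.numClasses(); c++) {
    System.out.printf("class %-5s recall=%.3f precision=%.3f%n",
            test.classAttribute().value(c), eval.recall(c), eval.precision(c));
}
```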

7. (20 Points) Now, explore the effect of the confidenceFactor option. You can find this by clicking on the classifier name (to the right of the Choose button on the Classify tab). In the classifier options window, click the More button to find out what the confidence factor controls. Try the values 0.1, 0.2, 0.3 and 0.5. What is the performance of the classifier in each case? Did you expect this, given your observations in the previous questions? Why do you think this happens?
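
If you prefer to automate the sweep rather than re-running the GUI four times, a sketch like the following (same assumptions about file name, seed and split as before) trains one tree per confidence factor and reports accuracy and tree size.

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

// Sketch: sweep the C4.5 pruning confidence factor and compare accuracy on the
// same 90/10 split. File name, seed and split are assumptions matching the GUI setup.
public class ConfidenceSweep {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("credit_Dataset.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);
        data.randomize(new Random(1));
        int trainSize = (int) Math.round(data.numInstances() * 0.9);
        Instances train = new Instances(data, 0, trainSize);
        Instances test  = new Instances(data, trainSize, data.numInstances() - trainSize);

        for (float cf : new float[] {0.1f, 0.2f, 0.3f, 0.5f}) {
            J48 tree = new J48();
            tree.setConfidenceFactor(cf);       // lower values mean more aggressive pruning
            tree.buildClassifier(train);
            Evaluation eval = new Evaluation(train);
            eval.evaluateModel(tree, test);
            System.out.printf("C = %.1f  accuracy = %.2f%%  tree size = %.0f%n",
                    cf, eval.pctCorrect(), tree.measureTreeSize());
        }
    }
}
```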

8. (20 Points) Suppose that it is worse to classify a customer as good when they are bad than it is to classify a customer as bad when they are good. Which value would you pick for the confidence factor? Which performance measure would you base your decision on?
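
To ground that decision in numbers, it helps to look beyond accuracy at how often a truly bad customer is accepted as good. The fragment below is meant to sit inside the loop of the confidence-factor sweep sketch above; it assumes the class label is literally "bad", which you should verify against the ARFF header.

```java
// Fragment: place inside the sweep loop above, after eval.evaluateModel(...).
int badIndex = test.classAttribute().indexOfValue("bad");   // assumption: the label is literally "bad"
if (badIndex >= 0) {
    // Recall on the bad class: fraction of truly bad applicants the model catches;
    // the remainder are bad applicants wrongly classified as good.
    // falsePositiveRate(badIndex): fraction of good applicants mislabelled as bad.
    System.out.printf("    recall(bad) = %.3f  FP-rate(bad) = %.3f%n",
            eval.recall(badIndex), eval.falsePositiveRate(badIndex));
}
```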

9. (Bonus: 20 Points) Finally, we will create a random decision forest and compare the performance of this classifier to that of the decision tree and the decision stump. A random decision forest is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes predicted by the individual trees. Again, set the Percentage split test option to 90%. Select classifiers > trees > RandomForest and hit Start. Again, observe the output. How high can you get the performance of the classifier by changing the number of trees (numTrees) parameter? How does the random decision forest compare, performance-wise, to the decision tree and the decision stump?
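
A sketch for varying the number of trees programmatically is below. Note that the option is called numIterations in Weka 3.8+ and numTrees in older releases; the file name, seed and split are the same assumptions as before.

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

// Sketch: vary the number of trees in the random forest on the same 90/10 split.
// File name, seed and split are assumptions. In Weka 3.8+ the option is numIterations
// (setNumIterations); in older releases it is numTrees (setNumTrees).
public class ForestSweep {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("credit_Dataset.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);
        data.randomize(new Random(1));
        int trainSize = (int) Math.round(data.numInstances() * 0.9);
        Instances train = new Instances(data, 0, trainSize);
        Instances test  = new Instances(data, trainSize, data.numInstances() - trainSize);

        for (int numTrees : new int[] {10, 50, 100, 200}) {
            RandomForest forest = new RandomForest();
            forest.setNumIterations(numTrees);   // use setNumTrees(numTrees) on Weka < 3.8
            forest.buildClassifier(train);
            Evaluation eval = new Evaluation(train);
            eval.evaluateModel(forest, test);
            System.out.printf("trees = %3d  accuracy = %.2f%%%n", numTrees, eval.pctCorrect());
        }
    }
}
```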

Deliverable:

· Your report, including screenshots of your implementation for each section and the results.
