All quiz and assignment links:
Coursera | Applied Machine Learning in Python(University of Michigan)| Quiz
Coursera | Applied Machine Learning in Python(University of Michigan)| Assignment1
Coursera | Applied Machine Learning in Python(University of Michigan)| Assignment2
Coursera | Applied Machine Learning in Python(University of Michigan)| Assignment3
Coursera | Applied Machine Learning in Python(University of Michigan)| Assignment4
You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 1 - Introduction to Machine Learning
For this assignment, you will be using the Breast Cancer Wisconsin (Diagnostic) Database to create a classifier that can help diagnose patients. First, read through the description of the dataset (below).
```python
import numpy as np
```
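Only the first line of that cell is shown here; the rest presumably imports pandas and loads the dataset. A minimal sketch of what the elided part might look like (the variable name `cancer` is taken from the cells below):

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer

# Load the Breast Cancer Wisconsin (Diagnostic) dataset as a scikit-learn Bunch
cancer = load_breast_cancer()

# DESCR holds the full text description of the dataset mentioned above
print(cancer.DESCR)
```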
The object returned by load_breast_cancer() is a scikit-learn Bunch object, which is similar to a dictionary.
```python
cancer.keys()
```
dict_keys(['data', 'target', 'frame', 'target_names', 'DESCR', 'feature_names', 'filename'])
Question 0 (Example)
How many features does the breast cancer dataset have?
This function should return an integer.
```python
# You should write your whole answer within the function provided. The autograder will call
# this function and compare its return value against the correct solution.
```
30
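Only the leading comment of that answer cell survives above. A possible implementation (the function name answer_zero is an assumption, following the answer_one ... answer_eight naming used below) is:

```python
def answer_zero():
    # Each entry of feature_names corresponds to one column of cancer.data
    return len(cancer['feature_names'])

answer_zero()  # returns 30
```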
Question 1
Scikit-learn works with lists, numpy arrays, scipy-sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. Using a DataFrame does, however, make tasks such as data munging easier, so let's practice creating a classifier with a pandas DataFrame.
Convert the sklearn.dataset cancer to a DataFrame.

This function should return a (569, 31) DataFrame with

columns =

```
['mean radius', 'mean texture', 'mean perimeter', 'mean area',
 'mean smoothness', 'mean compactness', 'mean concavity',
 'mean concave points', 'mean symmetry', 'mean fractal dimension',
 'radius error', 'texture error', 'perimeter error', 'area error',
 'smoothness error', 'compactness error', 'concavity error',
 'concave points error', 'symmetry error', 'fractal dimension error',
 'worst radius', 'worst texture', 'worst perimeter', 'worst area',
 'worst smoothness', 'worst compactness', 'worst concavity',
 'worst concave points', 'worst symmetry', 'worst fractal dimension',
 'target']
```

and index =

```
RangeIndex(start=0, stop=569, step=1)
```
```python
def answer_one():
    ...
```
| | mean radius | mean texture | mean perimeter | mean area | mean smoothness | mean compactness | mean concavity | mean concave points | mean symmetry | mean fractal dimension | ... | worst texture | worst perimeter | worst area | worst smoothness | worst compactness | worst concavity | worst concave points | worst symmetry | worst fractal dimension | target |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 17.99 | 10.38 | 122.80 | 1001.0 | 0.11840 | 0.27760 | 0.30010 | 0.14710 | 0.2419 | 0.07871 | ... | 17.33 | 184.60 | 2019.0 | 0.16220 | 0.66560 | 0.7119 | 0.2654 | 0.4601 | 0.11890 | 0 |
1 | 20.57 | 17.77 | 132.90 | 1326.0 | 0.08474 | 0.07864 | 0.08690 | 0.07017 | 0.1812 | 0.05667 | ... | 23.41 | 158.80 | 1956.0 | 0.12380 | 0.18660 | 0.2416 | 0.1860 | 0.2750 | 0.08902 | 0 |
2 | 19.69 | 21.25 | 130.00 | 1203.0 | 0.10960 | 0.15990 | 0.19740 | 0.12790 | 0.2069 | 0.05999 | ... | 25.53 | 152.50 | 1709.0 | 0.14440 | 0.42450 | 0.4504 | 0.2430 | 0.3613 | 0.08758 | 0 |
3 | 11.42 | 20.38 | 77.58 | 386.1 | 0.14250 | 0.28390 | 0.24140 | 0.10520 | 0.2597 | 0.09744 | ... | 26.50 | 98.87 | 567.7 | 0.20980 | 0.86630 | 0.6869 | 0.2575 | 0.6638 | 0.17300 | 0 |
4 | 20.29 | 14.34 | 135.10 | 1297.0 | 0.10030 | 0.13280 | 0.19800 | 0.10430 | 0.1809 | 0.05883 | ... | 16.67 | 152.20 | 1575.0 | 0.13740 | 0.20500 | 0.4000 | 0.1625 | 0.2364 | 0.07678 | 0 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
564 | 21.56 | 22.39 | 142.00 | 1479.0 | 0.11100 | 0.11590 | 0.24390 | 0.13890 | 0.1726 | 0.05623 | ... | 26.40 | 166.10 | 2027.0 | 0.14100 | 0.21130 | 0.4107 | 0.2216 | 0.2060 | 0.07115 | 0 |
565 | 20.13 | 28.25 | 131.20 | 1261.0 | 0.09780 | 0.10340 | 0.14400 | 0.09791 | 0.1752 | 0.05533 | ... | 38.25 | 155.00 | 1731.0 | 0.11660 | 0.19220 | 0.3215 | 0.1628 | 0.2572 | 0.06637 | 0 |
566 | 16.60 | 28.08 | 108.30 | 858.1 | 0.08455 | 0.10230 | 0.09251 | 0.05302 | 0.1590 | 0.05648 | ... | 34.12 | 126.70 | 1124.0 | 0.11390 | 0.30940 | 0.3403 | 0.1418 | 0.2218 | 0.07820 | 0 |
567 | 20.60 | 29.33 | 140.10 | 1265.0 | 0.11780 | 0.27700 | 0.35140 | 0.15200 | 0.2397 | 0.07016 | ... | 39.42 | 184.60 | 1821.0 | 0.16500 | 0.86810 | 0.9387 | 0.2650 | 0.4087 | 0.12400 | 0 |
568 | 7.76 | 24.54 | 47.92 | 181.0 | 0.05263 | 0.04362 | 0.00000 | 0.00000 | 0.1587 | 0.05884 | ... | 30.37 | 59.16 | 268.6 | 0.08996 | 0.06444 | 0.0000 | 0.0000 | 0.2871 | 0.07039 | 1 |
569 rows × 31 columns
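The body of answer_one() is elided above. One way to build the required DataFrame from the cancer Bunch (the variable name cancerdf mirrors the hint in Question 6) is the following sketch:

```python
def answer_one():
    # One row per sample, one column per feature, using the original feature names
    df = pd.DataFrame(cancer['data'], columns=cancer['feature_names'])
    # Append the class labels as a 'target' column, giving the (569, 31) shape
    df['target'] = cancer['target']
    return df

cancerdf = answer_one()
cancerdf.head()
```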
Question 2
What is the class distribution? (i.e. how many instances of malignant (encoded 0) and how many benign (encoded 1)?)

This function should return a Series named target of length 2 with integer values and index = ['malignant', 'benign'].
```python
def answer_two():
    ...
```
malignant 212
benign 357
dtype: int64
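A possible answer_two(), counting how many rows carry each label, might look like this:

```python
def answer_two():
    cancerdf = answer_one()
    # value_counts gives the number of rows per class; sort_index puts
    # malignant (0) before benign (1) so the relabelled index lines up
    counts = cancerdf['target'].value_counts().sort_index()
    counts.index = ['malignant', 'benign']
    counts.name = 'target'
    return counts
```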
Question 3
Split the DataFrame into X (the data) and y (the labels).

This function should return a tuple of length 2: (X, y), where

X has shape (569, 30)
y has shape (569,)
```python
def answer_three():
    ...
```
```python
X, y = answer_three()
```
(569, 30)
(569,)
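A sketch of answer_three(), splitting the DataFrame into features and labels:

```python
def answer_three():
    cancerdf = answer_one()
    X = cancerdf.drop('target', axis=1)  # all 30 feature columns, shape (569, 30)
    y = cancerdf['target']               # the label column, shape (569,)
    return X, y
```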
Question 4
Using train_test_split, split X and y into training and test sets (X_train, X_test, y_train, and y_test).

Set the random number generator state to 0 using random_state=0 to make sure your results match the autograder!

This function should return a tuple of length 4: (X_train, X_test, y_train, y_test), where

X_train has shape (426, 30)
X_test has shape (143, 30)
y_train has shape (426,)
y_test has shape (143,)
```python
from sklearn.model_selection import train_test_split
```
```python
X_train, X_test, y_train, y_test = answer_four()
```
(426, 30)
(143, 30)
(426,)
(143,)
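A possible answer_four(), reusing the train_test_split import above; with the default test_size of 0.25, the 569 samples split into 426 training and 143 test rows:

```python
def answer_four():
    X, y = answer_three()
    # random_state=0 makes the split reproducible, so the shapes and scores
    # below match what the autograder expects
    return train_test_split(X, y, random_state=0)
```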
Question 5
Using KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with X_train, y_train and using one nearest neighbor (n_neighbors = 1).

This function should return a sklearn.neighbors.classification.KNeighborsClassifier.
```python
from sklearn.neighbors import KNeighborsClassifier
```
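The rest of that cell is not shown. Following the naming pattern of the other answers, a one-nearest-neighbor answer_five() using the import above could look like:

```python
def answer_five():
    X_train, X_test, y_train, y_test = answer_four()
    # Fit a 1-NN classifier on the training split; fit() returns the estimator
    knn = KNeighborsClassifier(n_neighbors=1)
    return knn.fit(X_train, y_train)
```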
Question 6
Using your knn classifier, predict the class label using the mean value for each feature.
Hint: You can use cancerdf.mean()[:-1].values.reshape(1, -1), which gets the mean value for each feature, ignores the target column, and reshapes the data from 1 dimension to 2 (necessary for the predict method of KNeighborsClassifier).

This function should return a numpy array, either array([ 0.]) or array([ 1.]).
```python
def answer_six():
    ...
```
```python
answer_six()
```
array([1])
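A sketch of answer_six() that follows the hint directly:

```python
def answer_six():
    cancerdf = answer_one()
    # Mean of every feature column (the slice drops 'target'), reshaped to a
    # single 2-D sample as required by predict()
    means = cancerdf.mean()[:-1].values.reshape(1, -1)
    knn = answer_five()
    return knn.predict(means)
```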
Question 7
Using your knn classifier, predict the class labels for the test set X_test.

This function should return a numpy array with shape (143,) and values either 0.0 or 1.0.
```python
def answer_seven():
    ...
```
```python
predict_X_test = answer_seven()
```
(143,)
{0, 1}
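A possible answer_seven():

```python
def answer_seven():
    X_train, X_test, y_train, y_test = answer_four()
    knn = answer_five()
    # One predicted label (0 or 1) per test sample, giving shape (143,)
    return knn.predict(X_test)
```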
Question 8
Find the score (mean accuracy) of your knn classifier using X_test and y_test.

This function should return a float between 0 and 1.
```python
def answer_eight():
    ...
```
```python
answer_eight()
```
0.916083916083916
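A sketch of answer_eight(); score() reports the mean accuracy on the given data:

```python
def answer_eight():
    X_train, X_test, y_train, y_test = answer_four()
    knn = answer_five()
    # Fraction of test samples whose predicted label matches y_test
    return knn.score(X_test, y_test)
```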
Optional plot
Try using the plotting function below to visualize the different prediction scores between training and test sets, as well as malignant and benign cells.
```python
def accuracy_plot():
    ...
```
```python
# Uncomment the plotting function to see the visualization,
# and comment it out again before submitting the notebook for grading.
# accuracy_plot()
```
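The body of accuracy_plot() is omitted above. One way to implement it, under the assumption that it should compare the classifier's accuracy on the malignant and benign subsets of the training and test splits, is the following sketch:

```python
import matplotlib.pyplot as plt

def accuracy_plot():
    X_train, X_test, y_train, y_test = answer_four()
    knn = answer_five()

    # Split each set by class so the classifier can be scored per subgroup
    mal_train_X, mal_train_y = X_train[y_train == 0], y_train[y_train == 0]
    ben_train_X, ben_train_y = X_train[y_train == 1], y_train[y_train == 1]
    mal_test_X, mal_test_y = X_test[y_test == 0], y_test[y_test == 0]
    ben_test_X, ben_test_y = X_test[y_test == 1], y_test[y_test == 1]

    scores = [knn.score(mal_train_X, mal_train_y),
              knn.score(ben_train_X, ben_train_y),
              knn.score(mal_test_X, mal_test_y),
              knn.score(ben_test_X, ben_test_y)]

    # Bar chart: training accuracy on the left pair, test accuracy on the right
    plt.figure()
    plt.bar(np.arange(4), scores)
    plt.ylim(0, 1)
    plt.xticks(np.arange(4), ['Malignant\ntrain', 'Benign\ntrain',
                              'Malignant\ntest', 'Benign\ntest'])
    plt.ylabel('Accuracy')
    plt.title('1-NN accuracy by split and class')
    plt.show()
```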