
Coursera | Introduction to Data Science in Python (University of Michigan) | Assignment 4

   To be honest, the assignments in this course are fairly challenging, especially Assignment 4 (sigh), so I'm posting mine here for reference.
   When I have time (or if there's demand) I'll put all of the code on GitHub (though I'm a bit worried it might get taken down).
   Related links:
   Coursera | Introduction to Data Science in Python (University of Michigan) | Quiz
   Coursera | Introduction to Data Science in Python (University of Michigan) | Assignment 1
   Coursera | Introduction to Data Science in Python (University of Michigan) | Assignment 2
   Coursera | Introduction to Data Science in Python (University of Michigan) | Assignment 3
   Coursera | Introduction to Data Science in Python (University of Michigan) | Assignment 4
   CSDN links:
   Coursera | Introduction to Data Science in Python (University of Michigan) | Quiz Answers
   Coursera | Introduction to Data Science in Python (University of Michigan) | Assignment 1
   Coursera | Introduction to Data Science in Python (University of Michigan) | Assignment 2
   Coursera | Introduction to Data Science in Python (University of Michigan) | Assignment 3
   Coursera | Introduction to Data Science in Python (University of Michigan) | Assignment 4

Assignment 4

Description

In this assignment you must read in a file of metropolitan regions and associated sports teams from assets/wikipedia_data.html and answer some questions about each metropolitan region. Each of these regions may have one or more teams from the “Big 4”: NFL (football, in assets/nfl.csv), MLB (baseball, in assets/mlb.csv), NBA (basketball, in assets/nba.csv), or NHL (hockey, in assets/nhl.csv). Please keep in mind that all questions are from the perspective of the metropolitan region, and that this file is the “source of authority” for the location of a given sports team. Thus teams which are commonly known by a different area (e.g. “Oakland Raiders”) need to be mapped into the metropolitan region given (e.g. San Francisco Bay Area). This will require some human data understanding outside of the data you’ve been given (e.g. you will have to hand-code some names, and might need to google to find out where teams are)!
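As a hedged illustration of the hand-coding step, one option is a small dictionary keyed by team name; the entries and the helper below are hypothetical examples only (the solutions in this post instead pull team names out of the Wikipedia table with regular expressions):

# Hypothetical hand-coded fixes for teams whose common name does not match
# the metropolitan area used in wikipedia_data.html (examples only).
team_to_region = {
    "Oakland Raiders": "San Francisco Bay Area",
    "Golden State Warriors": "San Francisco Bay Area",
    "Texas Rangers": "Dallas–Fort Worth",
}

def region_for(team_name, default=None):
    # Fall back to `default` for teams that do not need remapping.
    return team_to_region.get(team_name, default)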

For each sport I would like you to answer the question: what is the win/loss ratio’s correlation with the population of the city it is in? Win/loss ratio refers to the number of wins over the number of wins plus the number of losses. Remember that to calculate the correlation with pearsonr you are going to send in two ordered lists of values: the populations from the wikipedia_data.html file and the win/loss ratios for a given sport, in the same order. Average the win/loss ratios for those cities which have multiple teams of a single sport. Each sport is worth an equal amount (20% * 4 = 80%) of the grade for this assignment. You should only use data from year 2018 for your analysis – this is important!
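As a quick reminder of how pearsonr is called, here is a minimal sketch with made-up numbers (not the assignment data): it takes two equally long, ordered sequences and returns the correlation coefficient together with a p-value.

from scipy import stats

# Two ordered lists for the same cities, in the same order
# (the numbers are invented purely for illustration).
populations = [20_000_000, 13_000_000, 9_500_000, 6_100_000]
win_loss_ratios = [0.55, 0.48, 0.62, 0.51]

corr, p_value = stats.pearsonr(populations, win_loss_ratios)
print(corr)  # Pearson correlation coefficient between the two lists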

Notes

  1. Do not include data about the MLS or CFL in any of the work you are doing, we’re only interested in the Big 4 in this assignment.
  2. I highly suggest that you first tackle the four correlation questions in order, as they are all similar and worth the majority of grades for this assignment. This is by design!
  3. There may be more teams than the assert statements test, so remember to collapse multiple teams in one city into a single value (see the sketch after this list)!
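For note 3, here is a minimal sketch (toy data, not the assignment files) of collapsing several teams of one sport in the same metropolitan area into a single averaged win/loss value:

import pandas as pd

# Toy data: two New York teams and one Boston team in the same sport.
df = pd.DataFrame({
    "Metropolitan area": ["New York City", "New York City", "Boston"],
    "W/L%": [0.60, 0.40, 0.55],
})

# One row per metropolitan area, with the win/loss ratios averaged.
collapsed = df.groupby("Metropolitan area")["W/L%"].mean()
print(collapsed)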

Question 1

For this question, calculate the win/loss ratio’s correlation with the population of the city it is in for the NHL using 2018 data.

Code

import pandas as pd
import numpy as np
import scipy.stats as stats
import re

cities = pd.read_html("assets/wikipedia_data.html")[1]
cities = cities.iloc[:-1, [0, 3, 5, 6, 7, 8]]
cities.rename(columns={"Population (2016 est.)[8]": "Population"}, inplace=True)
# Strip footnote markers such as "[note 1]" from the league columns
cities['NFL'] = cities['NFL'].str.replace(r"\[.*\]", "")
cities['MLB'] = cities['MLB'].str.replace(r"\[.*\]", "")
cities['NBA'] = cities['NBA'].str.replace(r"\[.*\]", "")
cities['NHL'] = cities['NHL'].str.replace(r"\[.*\]", "")

Big4 = 'NHL'

def nhl_correlation():
    # Split cells that list several teams into up to three capture groups
    team = cities[Big4].str.extract('([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)')
    team['Metropolitan area'] = cities['Metropolitan area']
    # One row per (metropolitan area, team); drop empty and "—" entries
    team = pd.melt(team, id_vars=['Metropolitan area']).drop(columns=['variable']).replace("", np.nan).replace("—", np.nan).dropna().reset_index().rename(columns={"value": "team"})
    team = pd.merge(team, cities, how='left', on='Metropolitan area').iloc[:, 1:4]
    team = team.astype({'Metropolitan area': str, 'team': str, 'Population': int})
    # Keep only the last word of each team name so the two tables can be joined on it
    team['team'] = team['team'].str.replace('[\w.]*\ ', '')

    _df = pd.read_csv("assets/" + str.lower(Big4) + ".csv")
    _df = _df[_df['year'] == 2018]
    _df['team'] = _df['team'].str.replace(r'\*', "")
    _df = _df[['team', 'W', 'L']]

    # Drop division-header rows, where team, W and L all hold the same string
    dropList = []
    for i in range(_df.shape[0]):
        row = _df.iloc[i]
        if row['team'] == row['W'] and row['L'] == row['W']:
            dropList.append(i)
    _df = _df.drop(dropList)

    _df['team'] = _df['team'].str.replace('[\w.]* ', '')
    _df = _df.astype({'team': str, 'W': int, 'L': int})
    _df['W/L%'] = _df['W'] / (_df['W'] + _df['L'])

    merge = pd.merge(team, _df, 'outer', on='team')
    # Collapse multiple teams in one metropolitan area into a single averaged value
    merge = merge.groupby('Metropolitan area').agg({'W/L%': np.nanmean, 'Population': np.nanmean})

    population_by_region = merge['Population']  # pass in metropolitan area population from cities
    win_loss_by_region = merge['W/L%']  # pass in win/loss ratio from _df in the same order as cities["Metropolitan area"]

    assert len(population_by_region) == len(win_loss_by_region), "Q1: Your lists must be the same length"
    assert len(population_by_region) == 28, "Q1: There should be 28 teams being analysed for NHL"

    return stats.pearsonr(population_by_region, win_loss_by_region)[0]

Result

0.012486162921209923

Question 2

For this question, calculate the win/loss ratio’s correlation with the population of the city it is in for the NBA using 2018 data.

Code

import pandas as pd
import numpy as np
import scipy.stats as stats
import re

cities = pd.read_html("assets/wikipedia_data.html")[1]
cities = cities.iloc[:-1, [0, 3, 5, 6, 7, 8]]
cities.rename(columns={"Population (2016 est.)[8]": "Population"}, inplace=True)
cities['NFL'] = cities['NFL'].str.replace(r"\[.*\]", "")
cities['MLB'] = cities['MLB'].str.replace(r"\[.*\]", "")
cities['NBA'] = cities['NBA'].str.replace(r"\[.*\]", "")
cities['NHL'] = cities['NHL'].str.replace(r"\[.*\]", "")

Big4 = 'NBA'

def nba_correlation():
    team = cities[Big4].str.extract('([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)')
    team['Metropolitan area'] = cities['Metropolitan area']
    team = pd.melt(team, id_vars=['Metropolitan area']).drop(columns=['variable']).replace("", np.nan).replace("—", np.nan).dropna().reset_index().rename(columns={"value": "team"})
    team = pd.merge(team, cities, how='left', on='Metropolitan area').iloc[:, 1:4]
    team = team.astype({'Metropolitan area': str, 'team': str, 'Population': int})
    team['team'] = team['team'].str.replace('[\w.]*\ ', '')

    _df = pd.read_csv("assets/" + str.lower(Big4) + ".csv")
    _df = _df[_df['year'] == 2018]
    _df['team'] = _df['team'].str.replace(r'[\*]', "")
    _df['team'] = _df['team'].str.replace(r'\(\d*\)', "")
    _df['team'] = _df['team'].str.replace(r'[\xa0]', "")
    _df = _df[['team', 'W/L%']]
    _df['team'] = _df['team'].str.replace('[\w.]* ', '')
    _df = _df.astype({'team': str, 'W/L%': float})

    merge = pd.merge(team, _df, 'outer', on='team')
    merge = merge.groupby('Metropolitan area').agg({'W/L%': np.nanmean, 'Population': np.nanmean})

    population_by_region = merge['Population']  # pass in metropolitan area population from cities
    win_loss_by_region = merge['W/L%']  # pass in win/loss ratio from _df in the same order as cities["Metropolitan area"]

    assert len(population_by_region) == len(win_loss_by_region), "Q2: Your lists must be the same length"
    assert len(population_by_region) == 28, "Q2: There should be 28 teams being analysed for NBA"

    return stats.pearsonr(population_by_region, win_loss_by_region)[0]

Result

-0.1763635064218294

Question 3

For this question, calculate the win/loss ratio’s correlation with the population of the city it is in for the MLB using 2018 data.

Code

import pandas as pd
import numpy as np
import scipy.stats as stats
import re

cities = pd.read_html("assets/wikipedia_data.html")[1]
cities = cities.iloc[:-1, [0, 3, 5, 6, 7, 8]]
cities.rename(columns={"Population (2016 est.)[8]": "Population"}, inplace=True)
cities['NFL'] = cities['NFL'].str.replace(r"\[.*\]", "")
cities['MLB'] = cities['MLB'].str.replace(r"\[.*\]", "")
cities['NBA'] = cities['NBA'].str.replace(r"\[.*\]", "")
cities['NHL'] = cities['NHL'].str.replace(r"\[.*\]", "")

Big4 = 'MLB'

def mlb_correlation():
    team = cities[Big4].str.extract('([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)')
    team['Metropolitan area'] = cities['Metropolitan area']
    team = pd.melt(team, id_vars=['Metropolitan area']).drop(columns=['variable']).replace("", np.nan).replace("—", np.nan).dropna().reset_index().rename(columns={"value": "team"})
    team = pd.merge(team, cities, how='left', on='Metropolitan area').iloc[:, 1:4]
    team = team.astype({'Metropolitan area': str, 'team': str, 'Population': int})
    team['team'] = team['team'].str.replace('\ Sox', 'Sox')
    team['team'] = team['team'].str.replace('[\w.]*\ ', '')

    _df = pd.read_csv("assets/" + str.lower(Big4) + ".csv")
    _df = _df[_df['year'] == 2018]
    _df['team'] = _df['team'].str.replace(r'[\*]', "")
    _df['team'] = _df['team'].str.replace(r'\(\d*\)', "")
    _df['team'] = _df['team'].str.replace(r'[\xa0]', "")
    _df = _df[['team', 'W-L%']]
    _df.rename(columns={"W-L%": "W/L%"}, inplace=True)
    _df['team'] = _df['team'].str.replace('\ Sox', 'Sox')
    _df['team'] = _df['team'].str.replace('[\w.]* ', '')
    _df = _df.astype({'team': str, 'W/L%': float})

    merge = pd.merge(team, _df, 'outer', on='team')
    merge = merge.groupby('Metropolitan area').agg({'W/L%': np.nanmean, 'Population': np.nanmean})

    population_by_region = merge['Population']  # pass in metropolitan area population from cities
    win_loss_by_region = merge['W/L%']  # pass in win/loss ratio from _df in the same order as cities["Metropolitan area"]

    assert len(population_by_region) == len(win_loss_by_region), "Q3: Your lists must be the same length"
    assert len(population_by_region) == 26, "Q3: There should be 26 teams being analysed for MLB"

    return stats.pearsonr(population_by_region, win_loss_by_region)[0]

Result

0.15003737475409495

Question 4

For this question, calculate the win/loss ratio’s correlation with the population of the city it is in for the NFL using 2018 data.

Code

import pandas as pd
import numpy as np
import scipy.stats as stats
import re

cities = pd.read_html("assets/wikipedia_data.html")[1]
cities = cities.iloc[:-1, [0, 3, 5, 6, 7, 8]]
cities.rename(columns={"Population (2016 est.)[8]": "Population"}, inplace=True)
cities['NFL'] = cities['NFL'].str.replace(r"\[.*\]", "")
cities['MLB'] = cities['MLB'].str.replace(r"\[.*\]", "")
cities['NBA'] = cities['NBA'].str.replace(r"\[.*\]", "")
cities['NHL'] = cities['NHL'].str.replace(r"\[.*\]", "")

Big4 = 'NFL'

def nfl_correlation():
    team = cities[Big4].str.extract('([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)')
    team['Metropolitan area'] = cities['Metropolitan area']
    team = pd.melt(team, id_vars=['Metropolitan area']).drop(columns=['variable']).replace("", np.nan).replace("—", np.nan).dropna().reset_index().rename(columns={"value": "team"})
    team = pd.merge(team, cities, how='left', on='Metropolitan area').iloc[:, 1:4]
    team = team.astype({'Metropolitan area': str, 'team': str, 'Population': int})
    team['team'] = team['team'].str.replace('[\w.]*\ ', '')

    _df = pd.read_csv("assets/" + str.lower(Big4) + ".csv")
    _df = _df[_df['year'] == 2018]
    _df['team'] = _df['team'].str.replace(r'[\*]', "")
    _df['team'] = _df['team'].str.replace(r'\(\d*\)', "")
    _df['team'] = _df['team'].str.replace(r'[\xa0]', "")
    _df = _df[['team', 'W-L%']]
    _df.rename(columns={"W-L%": "W/L%"}, inplace=True)

    dropList = []
    for i in range(_df.shape[0]):
        row = _df.iloc[i]
        if row['team'] == row['W/L%']:
            dropList.append(i)
    _df = _df.drop(dropList)

    _df['team'] = _df['team'].str.replace('[\w.]* ', '')
    _df['team'] = _df['team'].str.replace('+', '')
    _df = _df.astype({'team': str, 'W/L%': float})

    merge = pd.merge(team, _df, 'outer', on='team')
    merge = merge.groupby('Metropolitan area').agg({'W/L%': np.nanmean, 'Population': np.nanmean})

    population_by_region = merge['Population']  # pass in metropolitan area population from cities
    win_loss_by_region = merge['W/L%']  # pass in win/loss ratio from _df in the same order as cities["Metropolitan area"]

    assert len(population_by_region) == len(win_loss_by_region), "Q4: Your lists must be the same length"
    assert len(population_by_region) == 29, "Q4: There should be 29 teams being analysed for NFL"

    return stats.pearsonr(population_by_region, win_loss_by_region)[0]

Result

0.004282141436393022

Question 5

In this question I would like you to explore the hypothesis that given that an area has two sports teams in different sports, those teams will perform the same within their respective sports. I would like to see this explored with a series of paired t-tests (so use ttest_rel) between all pairs of sports. Are there any sports where we can reject the null hypothesis? Again, average the values where a sport has multiple teams in one region. Remember, for each sport you should only include cities which have teams engaged in that sport; drop the others as appropriate. This question is worth 20% of the grade for this assignment.
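Before the full solution, here is a minimal sketch of the paired t-test being asked for, using stats.ttest_rel on two aligned lists (toy numbers only; the real code below first builds the aligned W/L% series for each pair of sports):

from scipy import stats

# Win/loss ratios for the same cities in two different sports, aligned by city
# (toy numbers for illustration only).
sport_a = [0.55, 0.48, 0.62, 0.51, 0.44]
sport_b = [0.50, 0.52, 0.58, 0.49, 0.47]

t_stat, p_value = stats.ttest_rel(sport_a, sport_b)
# A small p-value would let us reject the null hypothesis that the paired means are equal.
print(p_value)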

Code

import pandas as pd
import numpy as np
import scipy.stats as stats
import re

cities = pd.read_html("assets/wikipedia_data.html")[1]
cities = cities.iloc[:-1, [0, 3, 5, 6, 7, 8]]
cities.rename(columns={"Population (2016 est.)[8]": "Population"}, inplace=True)
cities['NFL'] = cities['NFL'].str.replace(r"\[.*\]", "")
cities['MLB'] = cities['MLB'].str.replace(r"\[.*\]", "")
cities['NBA'] = cities['NBA'].str.replace(r"\[.*\]", "")
cities['NHL'] = cities['NHL'].str.replace(r"\[.*\]", "")


def nhl_df():
    Big4 = 'NHL'
    team = cities[Big4].str.extract('([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)')
    team['Metropolitan area'] = cities['Metropolitan area']
    team = pd.melt(team, id_vars=['Metropolitan area']).drop(columns=['variable']).replace("", np.nan).replace("—", np.nan).dropna().reset_index().rename(columns={"value": "team"})
    team = pd.merge(team, cities, how='left', on='Metropolitan area').iloc[:, 1:4]
    team = team.astype({'Metropolitan area': str, 'team': str, 'Population': int})
    team['team'] = team['team'].str.replace('[\w.]*\ ', '')

    _df = pd.read_csv("assets/" + str.lower(Big4) + ".csv")
    _df = _df[_df['year'] == 2018]
    _df['team'] = _df['team'].str.replace(r'\*', "")
    _df = _df[['team', 'W', 'L']]

    dropList = []
    for i in range(_df.shape[0]):
        row = _df.iloc[i]
        if row['team'] == row['W'] and row['L'] == row['W']:
            dropList.append(i)
    _df = _df.drop(dropList)

    _df['team'] = _df['team'].str.replace('[\w.]* ', '')
    _df = _df.astype({'team': str, 'W': int, 'L': int})
    _df['W/L%'] = _df['W'] / (_df['W'] + _df['L'])

    merge = pd.merge(team, _df, 'inner', on='team')
    merge = merge.groupby('Metropolitan area').agg({'W/L%': np.nanmean, 'Population': np.nanmean})

    return merge[['W/L%']]


def nba_df():
    Big4 = 'NBA'
    team = cities[Big4].str.extract('([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)')
    team['Metropolitan area'] = cities['Metropolitan area']
    team = pd.melt(team, id_vars=['Metropolitan area']).drop(columns=['variable']).replace("", np.nan).replace("—", np.nan).dropna().reset_index().rename(columns={"value": "team"})
    team = pd.merge(team, cities, how='left', on='Metropolitan area').iloc[:, 1:4]
    team = team.astype({'Metropolitan area': str, 'team': str, 'Population': int})
    team['team'] = team['team'].str.replace('[\w.]*\ ', '')

    _df = pd.read_csv("assets/" + str.lower(Big4) + ".csv")
    _df = _df[_df['year'] == 2018]
    _df['team'] = _df['team'].str.replace(r'[\*]', "")
    _df['team'] = _df['team'].str.replace(r'\(\d*\)', "")
    _df['team'] = _df['team'].str.replace(r'[\xa0]', "")
    _df = _df[['team', 'W/L%']]
    _df['team'] = _df['team'].str.replace('[\w.]* ', '')
    _df = _df.astype({'team': str, 'W/L%': float})

    merge = pd.merge(team, _df, 'outer', on='team')
    merge = merge.groupby('Metropolitan area').agg({'W/L%': np.nanmean, 'Population': np.nanmean})
    return merge[['W/L%']]


def mlb_df():
    Big4 = 'MLB'
    team = cities[Big4].str.extract('([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)')
    team['Metropolitan area'] = cities['Metropolitan area']
    team = pd.melt(team, id_vars=['Metropolitan area']).drop(columns=['variable']).replace("", np.nan).replace("—", np.nan).dropna().reset_index().rename(columns={"value": "team"})
    team = pd.merge(team, cities, how='left', on='Metropolitan area').iloc[:, 1:4]
    team = team.astype({'Metropolitan area': str, 'team': str, 'Population': int})
    team['team'] = team['team'].str.replace('\ Sox', 'Sox')
    team['team'] = team['team'].str.replace('[\w.]*\ ', '')

    _df = pd.read_csv("assets/" + str.lower(Big4) + ".csv")
    _df = _df[_df['year'] == 2018]
    _df['team'] = _df['team'].str.replace(r'[\*]', "")
    _df['team'] = _df['team'].str.replace(r'\(\d*\)', "")
    _df['team'] = _df['team'].str.replace(r'[\xa0]', "")
    _df = _df[['team', 'W-L%']]
    _df.rename(columns={"W-L%": "W/L%"}, inplace=True)
    _df['team'] = _df['team'].str.replace('\ Sox', 'Sox')
    _df['team'] = _df['team'].str.replace('[\w.]* ', '')
    _df = _df.astype({'team': str, 'W/L%': float})

    merge = pd.merge(team, _df, 'outer', on='team')
    merge = merge.groupby('Metropolitan area').agg({'W/L%': np.nanmean, 'Population': np.nanmean})

    return merge[['W/L%']]


def nfl_df():
    Big4 = 'NFL'
    team = cities[Big4].str.extract('([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)([A-Z]{0,2}[a-z0-9]*\ [A-Z]{0,2}[a-z0-9]*|[A-Z]{0,2}[a-z0-9]*)')
    team['Metropolitan area'] = cities['Metropolitan area']
    team = pd.melt(team, id_vars=['Metropolitan area']).drop(columns=['variable']).replace("", np.nan).replace("—", np.nan).dropna().reset_index().rename(columns={"value": "team"})
    team = pd.merge(team, cities, how='left', on='Metropolitan area').iloc[:, 1:4]
    team = team.astype({'Metropolitan area': str, 'team': str, 'Population': int})
    team['team'] = team['team'].str.replace('[\w.]*\ ', '')

    _df = pd.read_csv("assets/" + str.lower(Big4) + ".csv")
    _df = _df[_df['year'] == 2018]
    _df['team'] = _df['team'].str.replace(r'[\*]', "")
    _df['team'] = _df['team'].str.replace(r'\(\d*\)', "")
    _df['team'] = _df['team'].str.replace(r'[\xa0]', "")
    _df = _df[['team', 'W-L%']]
    _df.rename(columns={"W-L%": "W/L%"}, inplace=True)

    dropList = []
    for i in range(_df.shape[0]):
        row = _df.iloc[i]
        if row['team'] == row['W/L%']:
            dropList.append(i)
    _df = _df.drop(dropList)

    _df['team'] = _df['team'].str.replace('[\w.]* ', '')
    _df['team'] = _df['team'].str.replace('+', '')
    _df = _df.astype({'team': str, 'W/L%': float})

    merge = pd.merge(team, _df, 'outer', on='team')
    merge = merge.groupby('Metropolitan area').agg({'W/L%': np.nanmean, 'Population': np.nanmean})

    return merge[['W/L%']]


def create_df(sport):
    if sport == 'NFL':
        return nfl_df()
    elif sport == 'NBA':
        return nba_df()
    elif sport == 'NHL':
        return nhl_df()
    elif sport == 'MLB':
        return mlb_df()
    else:
        print("ERROR with input!")


def sports_team_performance():
    # Note: p_values is a full dataframe, so df.loc["NFL","NBA"] should be the same as df.loc["NBA","NFL"] and
    # df.loc["NFL","NFL"] should return np.nan
    sports = ['NFL', 'NBA', 'NHL', 'MLB']
    p_values = pd.DataFrame({k: np.nan for k in sports}, index=sports)

    for i in sports:
        for j in sports:
            if i != j:
                merge = pd.merge(create_df(i), create_df(j), 'inner', on=['Metropolitan area'])
                p_values.loc[i, j] = stats.ttest_rel(merge['W/L%_x'], merge['W/L%_y'])[1]

    assert abs(p_values.loc["NBA", "NHL"] - 0.02) <= 1e-2, "The NBA-NHL p-value should be around 0.02"
    assert abs(p_values.loc["MLB", "NFL"] - 0.80) <= 1e-2, "The MLB-NFL p-value should be around 0.80"
    return p_values

Result

          NFL       NBA       NHL       MLB
NFL       NaN  0.937509  0.030318  0.803459
NBA  0.937509       NaN  0.022386  0.949566
NHL  0.030318  0.022386       NaN  0.000703
MLB  0.803459  0.949566  0.000703       NaN


   That wraps up all the assignments; I hope you found them helpful.
   If you need anything else, leave a comment :) Discussion and sharing are welcome.

------------------   The End    Thanks for reading   ------------------