Pandas is one of the most popular frameworks for big-data analysis. For a data engineer, a good command of Pandas matters as much as a good command of SQL or Excel. If you plan to learn data analysis, machine learning, or data science tooling in Python, you will almost certainly run into it: Pandas is an open-source library for data manipulation and analysis in Python.
This article collects some of the most commonly used Pandas recipes for concrete scenarios. Before getting to those, here is a one-minute introduction to the essentials of Pandas for readers who are new to it.
One of the simplest ways to get started is to load a CSV file (a format similar to an Excel spreadsheet) and then slice and dice it in various ways:
Pandas loads the spreadsheet and lets you manipulate it programmatically in Python. At the core of pandas is an object type called the DataFrame, essentially a table of values in which every row and every column has a label. Use read_csv to load a basic CSV file containing data from a music streaming service:
import pandas
df = pandas.read_csv('music.csv')
The variable df is now a pandas DataFrame:
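The original article shows the loaded table as a screenshot. As a stand-in, here is a minimal sketch of such a DataFrame; the column names (Artist, Genre, Listeners, Plays) are inferred from the examples below, and the values are purely illustrative:
import pandas as pd
# illustrative stand-in for music.csv -- values are made up for demonstration
df = pd.DataFrame({
    'Artist': ['Billie Holiday', 'Jimi Hendrix', 'Miles Davis', 'SIA'],
    'Genre': ['Jazz', 'Rock', 'Jazz', 'Pop'],
    'Listeners': [1300000, 2700000, 1500000, 2000000],
    'Plays': [27000000, 70000000, 48000000, 74000000],
})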
We can select any column using its label:
Select one or more rows using their numbers:
And with loc we can select any region of the table using a column label and row numbers:
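A quick sketch of these three selection styles, assuming the column names above:
df['Artist']           # a single column, selected by its label
df.iloc[1:3]           # rows 1 and 2, selected by number
df.loc[1:3, 'Artist']  # a region: rows labeled 1 through 3 of the 'Artist' column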
Rows are easy to filter by specific values. For example, these are the Jazz musicians:
And these are the artists with more than 1,800,000 listeners:
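As a sketch, using the Genre and Listeners columns assumed above:
df[df['Genre'] == 'Jazz']       # only the Jazz musicians
df[df['Listeners'] > 1800000]   # artists with more than 1,800,000 listeners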
Many datasets have missing values. Suppose our DataFrame has one:
Pandas offers several ways to deal with this. The simplest is to drop the rows that contain missing values:
Another is to fill the missing values with fillna() (for example, with 0).
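Both approaches in a minimal sketch (neither modifies df in place unless you reassign the result):
df.dropna()   # drop every row that contains a missing value
df.fillna(0)  # or keep the rows and fill the holes with 0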
Sometimes you want to group rows by a specific criterion and aggregate their data. For example, group the dataset by genre and see how many listeners and plays each genre has:
Pandas combines the two "Jazz" rows into one, and because we used the sum() aggregation it adds the two jazz artists' listeners and plays together and shows the totals in the combined Jazz row.
groupby() collapses a dataset and surfaces insights from it; aggregation is also one of the basic tools of statistics. Besides sum(), pandas provides many other aggregation functions, including mean() for averages, min(), max(), and several more.
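A sketch under the column assumptions above; selecting the numeric columns explicitly keeps the aggregation away from the text column:
df.groupby('Genre')[['Listeners', 'Plays']].sum()   # totals per genre
df.groupby('Genre')[['Listeners', 'Plays']].mean()  # averages per genre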
During analysis you will often find you need a new column derived from existing ones, and Pandas makes this easy. When we tell Pandas to divide one column by another, it understands that we want to divide the individual values row by row (i.e. each row's "Plays" value divided by that row's "Listeners" value).
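A sketch, again assuming the column names above (the new column name 'Avg Plays' is my own choice):
# each row's Plays divided by that row's Listeners
df['Avg Plays'] = df['Plays'] / df['Listeners']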
The rest of this article assumes the pandas library has been imported; pd in the snippets below refers to pandas (and np to NumPy, which a few snippets use):
import pandas as pd
import numpy as np
"""making a dataframe"""
df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'))
"""quick way to create an interesting data frame to try things out"""
df = pd.DataFrame(np.random.randn(5, 4), columns=['a', 'b', 'c', 'd'])
"""convert a dictionary into a DataFrame"""
"""make the keys into columns"""
df = pd.DataFrame(dic, index=[0])
"""make the keys into row index"""
df = pd.DataFrame.from_dict(dic, orient='index')
"""append two dfs"""
pd.concat([df, df2], ignore_index=True)  # DataFrame.append() was removed in pandas 2.0
"""concat many dfs"""
pd.concat([pd.DataFrame([i], columns=['A']) for i in range(5)], ignore_index=True)
df['A'] """ will bring out a col """ df.ix[0] """will bring out a row, #0 in this case"""
"""to get an array from a data frame or a series use values, note it is not a function here, so no parans ()"""
point = df_allpoints[df_allpoints['names'] == given_point] # extract one point row.
point = point['desc'].values[0] # get its descriptor in array form.
"""Given a dataframe df to filter by a series s:"""
df[df['col_name'].isin(s)]
"""to do the same filter on the index instead of arbitrary column"""
df.loc[s]  # .ix was removed; use .loc for label-based access
""" display only certain columns, note it is a list inside the parans """
df[['A', 'B']]
"""drop rows with atleast one null value, pass params to modify
to atmost instead of atleast etc."""
df.dropna()
"""deleting a column"""
del df['column-name'] # note that df.column-name won't work.
"""making rows out of whole objects instead of parsing them into seperate columns"""
# Create the dataset (no data or just the indexes)
dataset = pd.DataFrame(index=names)
# Add a column where each entry is a 1-D array; each element of svds lands in a different DataFrame row
dataset['Norm'] = svds
"""sort by value in a column"""
df.sort_values('col_name')
"""filter by multiple conditions in a dataframe df
parentheses!"""
df[(df['gender'] == 'M') & (df['cc_iso'] == 'US')]
"""filter by conditions and the condition on row labels(index)"""
df[(df.a > 0) & (df.index.isin([0, 2, 4]))]
"""regexp filters on strings (vectorized), use .* instead of *"""
df[df.category.str.contains(r'some.regex.*pattern')]
"""logical NOT is like this"""
df[~df.category.str.contains(r'some.regex.*pattern')]
"""creating complex filters using functions on rows: http://goo.gl/r57b1"""
df[df.apply(lambda x: x['b'] > x['c'], axis=1)]
"""Pandas replace operation http://goo.gl/DJphs"""
df[2].replace(4, 17, inplace=True)
df[1][df[1] == 4] = 19
"""apply and map examples"""
"""add 1 to every element"""
df.applymap(lambda x: x + 1)  # renamed to DataFrame.map() in pandas >= 2.1
"""add 2 to row 3 and return the series"""
df.apply(lambda x: x[3]+2,axis=0)
"""add 1 to col a and return the series"""
df.apply(lambda x: x['a']+1,axis=1)
"""assigning some value to a slice is tricky as sometimes a copy is returned,
sometimes a view is returned based on numpy rules, more here:
http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-advanced"""
df.loc[df['part'].isin(ids), 'assigned_name'] = "some new value"  # .loc replaces the removed .ix
"""example of applying a complex external function
to each row of a data frame"""
import re

def stripper(x):
    # extract the first IPv4-looking substring from the text column
    l = re.findall(r'[0-9]+(?:\.[0-9]+){3}', x['Text with IP address embedded'])
    # you can take care of special cases, missing values, more than the
    # expected number of return values etc. like this.
    if l == []:
        return ''
    else:
        return l[0]

df.apply(stripper, axis=1)
"""can pass extra args and named ones eg.."""
def subtract_and_divide(x, sub, divide=1):
    return (x - sub) / divide
"""You may then apply this function as follows:"""
df.apply(subtract_and_divide, args=(5,), divide=3)
"""sort a groupby object by the size of the groups"""
dfl = sorted(dfg, key=lambda x: len(x[1]), reverse=True)
"""alternate syntax to sort groupby objects by size of groups"""
df[df['result']=='wrong'].groupby('classification')['classification'].count().reset_index(name='group_counts').sort_values(['group_counts'], ascending=False)
"""compute the means by group, and save mean to every element so group mean is available for every sample"""
sil_means = df.groupby('labels').mean()
df = df.join(sil_means, on='labels', rsuffix='_mean')
""" join doesn't work when names of cols are different, use merge instead, merge gets the job done most of the time """
mdf = pd.merge(pdf, udf, left_on='url', right_on='link')
"""groupby used like a histogram to obtain counts on sub-ranges of a variable, pretty handy"""
df.groupby(pd.cut(df.age, range(0, 130, 10))).size()
"""finding the distribution based on quantiles"""
df.groupby(pd.qcut(df.age, [0, 0.99, 1])).size()
"""if you don't need specific bins like above, and just want to count number of each values"""
df.age.value_counts()
"""one liner to normalize a data frame"""
(df - df.mean()) / (df.max() - df.min())
"""iterating and working with groups is easy when you realize each group is itself a DataFrame"""
for name, group in dg:
    print(name, type(group))
"""grouping and applying a group specific function to each group element,
I think this could be simpler, but here is my current version"""
quantile = [0, 0.50, 0.75, 0.90, 0.95, 0.99, 1]
grouped = df.groupby(pd.qcut(df.age, quantile))
frame_list = []
for i, group in enumerate(grouped):
    (label, frame) = group
    frame['age_quantile'] = quantile[i + 1]
    frame_list.append(frame)
df = pd.concat(frame_list)
"""misc: set display width, col_width etc for interactive pandas session"""
pd.set_option('display.width', 200)
pd.set_option('display.max_colwidth', 20)
pd.set_option('display.max_rows', 100)
"""sometimes you get an excel sheet with spaces in column names, super annoying"""
"""here: the nuclear option"""
df.columns = [c.lower().replace(' ', '_') for c in df.columns]
# to display a small df without any restrictions on the number of cols, rows.
# Please note the with statement, using this without it is not ideal ;-)
with pd.option_context('display.max_rows', None, 'display.max_columns', None):  # more options can be specified also
    print(df)