Example 3: Feature Extraction for User Profiling

Preface: A person's credit rating is usually judged from a user profile, so how do we pick out the useful features from the many attributes that describe a person? Below, a financial anti-fraud model is used as a simple example to build an intuition for feature extraction. Data download: Notes offered by Prospectus (https://www.lendingclub.com/info/prospectus.action). The raw data contains 145 feature columns.

1. Drop the columns that are visibly empty

import pandas as pd
import numpy as np
import sys

df = pd.read_csv('./data/LoanStats3a.csv', skiprows = 1, low_memory = True)# skiprows skips the first line; low_memory loads in low-memory chunks - change it to False if this raises an error
'''Read in the loan data'''
# print(df.head(10))
# print(df.info())
'''Inspect the dataframe's column info'''
df.drop('id', axis = 1, inplace = True)
df.drop('member_id', axis = 1, inplace = True)
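
Rather than spotting the empty columns by eye, they can also be listed programmatically. The following is a small sketch of my own (not part of the original script); it finds every column that contains no data at all, which has the same effect as the manual drops combined with the dropna(axis=1, how='all') call in the next step.

# Sketch (my addition): list the columns that are entirely empty
empty_cols = [col for col in df.columns if df[col].isnull().all()]
print(empty_cols)
# df.drop(columns=empty_cols, inplace=True)   # equivalent to dropping them by hand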

2. Clean the data: strip special characters from the features

df.term.replace(to_replace = '[^0-9]+', value = '', inplace = True, regex = True)# regex=True turns on regular-expression matching
df.int_rate.replace(to_replace = '%', value = '', inplace = True, regex = True)# strip the '%' sign; regex=True is needed so it is removed inside each value rather than matching the whole cell
df.drop('sub_grade', axis = 1, inplace = True)
df.drop('emp_title', axis = 1, inplace = True)

df.emp_length.replace('n/a', np.nan, inplace = True)
df.emp_length.replace(to_replace = '[^0-9]+', value = '', inplace = True, regex = True)
# This step is required - only after it can the result be checked properly with df.info()
df.dropna(axis = 1, how = 'all', inplace = True)# drop columns that are entirely empty
df.dropna(axis = 0, how = 'all', inplace = True)# drop rows that are entirely empty
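
One thing worth noting: after the special characters are stripped, term, int_rate and emp_length are still stored as strings. As a small sketch of my own (not in the original script), they could be cast to numbers explicitly so that later statistics treat them as numeric:

# Sketch (my addition): cast the cleaned string columns to numbers;
# errors='coerce' turns anything unparseable into NaN instead of raising
for col in ['term', 'int_rate', 'emp_length']:
    df[col] = pd.to_numeric(df[col], errors = 'coerce')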

3. Drop the columns with many null values (the non-null counts below come from df.info())

'''debt_settlement_flag_date     98 non-null object
settlement_status             155 non-null object
settlement_date               155 non-null object
settlement_amount             155 non-null float64
settlement_percentage         155 non-null float64
settlement_term               155 non-null float64'''
df.drop(['debt_settlement_flag_date','settlement_status','settlement_date',\
         'settlement_amount','settlement_percentage',\
         'settlement_term'], axis = 1, inplace = True)
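
The list above was read off df.info() by hand. As an alternative sketch of my own (the 0.95 null-ratio cutoff is just an illustrative assumption), the same columns can be found programmatically:

# Sketch (my addition): columns where more than 95% of the values are NaN
null_ratio = df.isnull().mean()
sparse_cols = null_ratio[null_ratio > 0.95].index.tolist()
print(sparse_cols)
# df.drop(columns=sparse_cols, inplace=True)   # same effect as the manual drop above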

4. Drop columns that are not null but are dominated by a few repeated values; handle the float columns first, then the object columns

# for col in df.select_dtypes(include = ['float']).columns:
#     print('col {} has {}'.format(col, len(df[col].unique())))

'''
col delinq_2yrs has 13
col inq_last_6mths has 29
col mths_since_last_delinq has 96
col mths_since_last_record has 114
col open_acc has 45
col pub_rec has 7

col total_acc has 84
col out_prncp has 1
col out_prncp_inv has 1

col collections_12_mths_ex_med has 2
col policy_code has 1
col acc_now_delinq has 3
col chargeoff_within_12_mths has 2
col delinq_amnt has 4
col pub_rec_bankruptcies has 4
col tax_liens has 3
'''
df.drop(['delinq_2yrs','inq_last_6mths','mths_since_last_delinq',\
         'mths_since_last_record','open_acc','pub_rec','total_acc',\
         'out_prncp','out_prncp_inv','collections_12_mths_ex_med',\
         'policy_code','acc_now_delinq','chargeoff_within_12_mths',\
         'delinq_amnt','pub_rec_bankruptcies',\
         'tax_liens'], axis = 1, inplace = True)

'''Drop the object-type columns whose values are mostly repeated'''
# for col in df.select_dtypes(include = ['object']).columns:
    # print('col {} has {}'.format(col, len(df[col].unique())))

'''
col term has 2
col grade has 7
col emp_length has 11
col home_ownership has 5
col verification_status has 3
col issue_d has 55

col pymnt_plan has 1
col purpose has 1
col zip_code has 837
col addr_state has 50
col earliest_cr_line has 531
col initial_list_status has 1

col last_pymnt_d has 113
col next_pymnt_d has 99
col last_credit_pull_d has 125
col application_type has 1
col hardship_flag has 1
col disbursement_method has 1
col debt_settlement_flag has 2
''' 
df.drop(['term','grade','emp_length','home_ownership','verification_status'\
         ,'issue_d','pymnt_plan','purpose','zip_code','addr_state',\
         'earliest_cr_line','initial_list_status','last_pymnt_d',\
         'next_pymnt_d','last_credit_pull_d','application_type','hardship_flag',
         'disbursement_method','debt_settlement_flag'], axis = 1, inplace = True)

df.drop(['desc','title'], axis = 1, inplace = True)
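
The unique-value counts above were again inspected by eye. As a small sketch of my own (the threshold is illustrative, and the label column has to be excluded), the completely uninformative columns - those holding a single value - can be found automatically:

# Sketch (my addition): a column holding a single value carries no information
constant_cols = [col for col in df.columns
                 if col != 'loan_status' and df[col].nunique(dropna = False) <= 1]
print(constant_cols)
# df.drop(columns=constant_cols, inplace=True)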

5. Binarize the label

df.loan_status.replace('Fully Paid', value = int(1), inplace = True)
df.loan_status.replace('Charged Off', value = int(0), inplace = True)
df.loan_status.replace('Does not meet the credit policy. Status:Fully Paid', \
                       np.nan, inplace = True)
df.loan_status.replace('Does not meet the credit policy. Status:Charged Off', \
                       np.nan, inplace = True)
'''Drop the rows whose label is null - this removes a little under 3,000 instances'''
df.dropna(subset = ['loan_status'], how = 'any', inplace = True)
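
A quick sanity check of my own (not in the original script): after binarization the label should contain only 1 (Fully Paid) and 0 (Charged Off), and value_counts shows how imbalanced the two classes are.

# Sketch (my addition): inspect the class balance of the binarized label
print(df.loan_status.value_counts())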

6. Fill the null values left in the samples with 0.0

df.fillna(0.0, inplace = True)
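
Another optional check of my own: confirm that no nulls remain after the fill.

# Sketch (my addition): every column should now report zero missing values
assert df.isnull().sum().sum() == 0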

7. Compute the correlations of the cleaned sample data and drop columns whose correlation coefficient exceeds 0.95

cor = df.corr()# correlation matrix (not the covariance matrix)
# cor.iloc[:, :] = np.tril(cor, k= -1)
# cor = cor.stack()
# print(cor[(cor>0.55)|(cor<-0.55)])
# sys.exit(0)
'''loan_amnt
funded_amnt
total_pymnt'''
'''Drop the columns whose correlation coefficient is above 0.95'''
df.drop(['loan_amnt','funded_amnt','total_pymnt'], axis = 1, inplace = True)
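
For reference, here is a sketch of my own that mirrors the commented-out inspection above and lists the highly correlated pairs programmatically; the 0.95 threshold matches this step's heading, while the commented code used 0.55 only for a looser look at the data.

# Sketch (my addition): keep only the lower triangle so each pair appears once,
# then list the pairs whose absolute correlation exceeds 0.95
cor_tri = cor.where(np.tril(np.ones(cor.shape), k = -1).astype(bool))
high_pairs = cor_tri.stack()
print(high_pairs[high_pairs.abs() > 0.95])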
print(df.info())# revol_util shows as 39786 non-null object: the '%' makes pandas store it as object even though it is really numeric
# sys.exit(0)
df = pd.get_dummies(df)# dummy variables: one-hot encode the remaining object columns
df.to_csv('./data/feature03.csv')
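
One last remark on the revol_util comment above: because the '%' sign leaves it as an object column, pd.get_dummies will one-hot encode it into many columns. If it is meant to stay numeric, a sketch of my own (an assumption, not part of the original pipeline) would strip the '%' and convert it before the get_dummies call:

# Sketch (my addition): convert revol_util to a number before one-hot encoding
df['revol_util'] = pd.to_numeric(df['revol_util'].replace('%', '', regex = True),
                                 errors = 'coerce')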
