Depth of deep feature synthesis with transformation primitives

Stack Overflow user
Asked on 2020-12-25 14:08:20
2 answers · 672 views · 0 followers · 2 votes

I am trying to use the featuretools library to generate new features on a simple dataset, but whenever I try a larger max_depth, nothing happens. My code so far is below:

# imports
import featuretools as ft

# creating the EntitySet
es = ft.EntitySet()
es.entity_from_dataframe(entity_id='data', dataframe=data, make_index=True, index='index')

# Run deep feature synthesis with transformation primitives
feature_matrix, feature_defs = ft.dfs(entityset=es, target_entity='data', max_depth=3,
                                      trans_primitives=['add_numeric', 'multiply_numeric'])

When I look at the features created, I only get the basic ones such as f1*f2 and f1+f2, but I would like more complex engineered features such as f2*(f1+f2) or f1+(f2+f1). I thought increasing max_depth would do this, but apparently it does not.

If it is possible at all, how can I do this?


2 Answers

Stack Overflow user

Accepted answer

Posted on 2020-12-27 21:51:26

I managed to answer my own question, so I will post it here.

You can create deeper features by running deep feature synthesis on the already generated features. Here is an example:

# imports
import featuretools as ft

# creating the EntitySet
es = ft.EntitySet()
es.entity_from_dataframe(entity_id='data', dataframe=data, make_index=True, index='index')

# Run deep feature synthesis with transformation primitives
feature_matrix, feature_defs = ft.dfs(entityset=es, target_entity='data',
                                      trans_primitives=['add_numeric','multiply_numeric'])

# creating an EntitySet from the new features
deep_es = ft.EntitySet()
deep_es.entity_from_dataframe(entity_id='data', index='index', dataframe=feature_matrix)

# Run deep feature synthesis with transformation primitives
deep_feature_matrix, deep_feature_defs=ft.dfs(entityset=deep_es, target_entity='data',
                                              trans_primitives=['add_numeric','multiply_numeric'])

Now, looking at the columns of deep_feature_matrix that we get (assuming the dataset has two features, f1 and f2):

"f1", "f2", "f1+f2", "f1*f2", "f1+f1*f2", "f1+f1+f2", "f1*f2+f1+f2", "f1*f2+f2", "f1+f2+f2", "f1*f1*f2", "f1*f1+f2", "f1*f2*f1+f2", "f1*f2*f2", "f1+f2*f2"

I have also created a function that does this automatically (including a full docstring):

def auto_feature_engineering(X, y, selection_percent=0.1, selection_strategy="best", num_depth_steps=2, transformatives=['divide_numeric', 'multiply_numeric']):
    """
    Automatically perform deep feature engineering and 
    feature selection.

    Parameters
    ----------
    X : pd.DataFrame
        Data to perform automatic feature engineering on.
    y : pd.DataFrame
        Target variable to find correlations of all
        features at each depth step to perform feature
        selection, y is not needed if selection_percent=1.
    selection_percent : float, optional
        Defines what percent of all the new features to
        keep for the next depth step.
    selection_strategy : {'best', 'random'}, optional
        Strategy used for feature selection, if 'best', 
        it will select the best features for the next depth
        step, if 'random', it will select features at random.
    num_depth_steps : integer, optional
        The number of depth steps. Every depth step, the model
        generates brand new features from the features made in 
        the last step, then selects a percent of these new
        features.
    transformatives : list, optional
        List of all possible transformations of the data to use
        when feature engineering, you can find the full list
        of possible transformations as well as what each one
        does using the following code: 
        `ft.primitives.list_primitives()[ft.primitives.list_primitives()["type"]=="transform"]`
        make sure to `import featuretools as ft`.

    Returns
    -------
    pd.DataFrame
        a dataframe of the brand new features.
    """
    from sklearn.feature_selection import mutual_info_classif
    import featuretools as ft  # used for EntitySet/dfs below; harmless if already imported at module level
    import pandas as pd        # used for pd.Series in the 'best' selection strategy
    selected_feature_df = X.copy()
    for i in range(num_depth_steps):
        
        # Perform feature engineering
        es = ft.EntitySet()
        es.entity_from_dataframe(entity_id='data', dataframe=selected_feature_df, 
                                 make_index=True, index='index')
        feature_matrix, feature_defs = ft.dfs(entityset=es, target_entity='data', trans_primitives=transformatives)
        
        # Remove features that are the same
        feature_corrs = feature_matrix.corr()[list(feature_matrix.keys())[0]]
        
        existing_corrs = []
        good_keys = []
        for key in feature_corrs.to_dict().keys():
            if feature_corrs[key] not in existing_corrs:
                existing_corrs.append(feature_corrs[key])
                good_keys.append(key)
        feature_matrix = feature_matrix[good_keys]
        
        # Remove illegal features
        legal_features = list(feature_matrix.columns)
        for feature in list(feature_matrix.columns):
            raw_feature_list = []
            for j in range(len(feature.split(" "))):
                if j%2==0:
                    raw_feature_list.append(feature.split(" ")[j])
            if len(raw_feature_list) > i+2: # num_depth_steps = 1, means max_num_raw_features_in_feature = 2
                legal_features.remove(feature)
        feature_matrix = feature_matrix[legal_features]
        
        # Perform feature selection
        if int(selection_percent)!=1:
            if selection_strategy=="best":
                corrs = mutual_info_classif(feature_matrix.reset_index(drop=True), y)
                corrs = pd.Series(corrs, name="")
                selected_corrs = corrs[corrs>=corrs.quantile(1-selection_percent)]
                selected_feature_df = feature_matrix.iloc[:, list(selected_corrs.keys())].reset_index(drop=True)
            elif selection_strategy=="random":
                selected_feature_df = feature_matrix.sample(frac=(selection_percent), axis=1).reset_index(drop=True)
            else:
                raise Exception("selection_strategy can be either 'best' or 'random', got '"+str(selection_strategy)+"'.")
        else:
            selected_feature_df = feature_matrix.reset_index(drop=True)
        if num_depth_steps!=1:
            rename_dict = {}
            for col in list(selected_feature_df.columns):
                rename_dict[col] = "("+col+")"
            selected_feature_df = selected_feature_df.rename(columns=rename_dict)
    if num_depth_steps!=1:
        rename_dict = {}
        for feature_name in list(selected_feature_df.columns):
            rename_dict[feature_name] = feature_name[int(num_depth_steps-1):-int(num_depth_steps-1)]
        selected_feature_df = selected_feature_df.rename(columns=rename_dict)
    return selected_feature_df
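As the docstring's note on the transformatives parameter suggests, featuretools can list every available transform primitive itself. A minimal standalone snippet (assuming only that featuretools is installed) for browsing them:

import featuretools as ft

# List all primitives and keep only the transform primitives, with their descriptions.
primitives = ft.primitives.list_primitives()
transform_primitives = primitives[primitives["type"] == "transform"]
print(transform_primitives[["name", "description"]].head(20))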

Here is an example of using auto_feature_engineering:

# Imports
>>> import seaborn as sns
>>> import pandas as pd
>>> import numpy as np
>>> from sklearn.preprocessing import OrdinalEncoder

# Load the penguins dataset
>>> penguins = sns.load_dataset("penguins")
>>> penguins.head()

  species     island  bill_length_mm  bill_depth_mm  flipper_length_mm  body_mass_g     sex
0  Adelie  Torgersen            39.1           18.7              181.0       3750.0    Male
1  Adelie  Torgersen            39.5           17.4              186.0       3800.0  Female
2  Adelie  Torgersen            40.3           18.0              195.0       3250.0  Female 
3  Adelie  Torgersen             NaN            NaN                NaN          NaN     NaN
4  Adelie  Torgersen            36.7           19.3              193.0       3450.0  Female

# Fill in NaN values of features using the distribution of the feature
>>> for feature in ["bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g", "sex"]:
...     s = penguins[feature].value_counts(normalize=True)
...     dist = penguins[feature].value_counts(normalize=True).values
...     missing = penguins[feature].isnull()
...     penguins.loc[missing, feature] = np.random.choice(s.index, size=len(penguins[missing]),p=s.values)

# Make X and y
>>> X = penguins[["bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g"]]
>>> y = penguins[["sex"]]

# Encode "sex" so that "Male" is 1 and "Female" is 0
>>> ord_enc = OrdinalEncoder()
>>> y = pd.DataFrame(ord_enc.fit_transform(y).astype(np.int8), columns=["sex"])

# Generate new dataset with more features
>>> penguins_with_more_features = auto_feature_engineering(X, y, selection_percent=1.)

# Correlations of the raw features
>>> find_correlations(X, y)
body_mass_g          0.422959
bill_depth_mm        0.353526
bill_length_mm       0.342109
flipper_length_mm    0.246944
Name: sex, dtype: float64

# Top 10% correlations of new features
>>> summarize_corr_series(find_top_percent(find_correlations(penguins_with_more_features, y), 0.1))
(flipper_length_mm / bill_depth_mm) / (body_mass_g):       0.7241123396175027
(bill_depth_mm * body_mass_g) / (flipper_length_mm):       0.7237223914820166
(bill_depth_mm * body_mass_g) * (bill_depth_mm):           0.7222108721971968
(bill_depth_mm * body_mass_g):                             0.7202272416625914
(bill_depth_mm * body_mass_g) * (flipper_length_mm):       0.6425813490692588
(bill_depth_mm * bill_length_mm) * (body_mass_g):          0.6398235593646668
(bill_depth_mm * flipper_length_mm) * (flipper_length_mm): 0.6360645935216128
(bill_depth_mm * flipper_length_mm):                       0.6083364815975281
(bill_depth_mm * body_mass_g) * (body_mass_g):             0.5888925994060027

In this example, given the penguin attributes body_mass_g, bill_depth_mm, bill_length_mm and flipper_length_mm, we want to predict their sex.
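To make that prediction task concrete, here is a small illustrative check (not part of the original answer) that fits a simple scikit-learn classifier on the raw features and on the engineered ones; exact scores will vary because the NaN filling above is random.

# Illustrative only: compare a basic classifier on raw vs. engineered features.
# Assumes X, y and penguins_with_more_features from the example above.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("raw features:       ", cross_val_score(clf, X, y["sex"], cv=5).mean())
print("engineered features:", cross_val_score(clf, penguins_with_more_features, y["sex"], cv=5).mean())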

You may also have noticed the other mysterious functions I used in the example, namely find_correlations, summarize_corr_series and find_top_percent. These are other handy functions I made to help summarize the results of auto_feature_engineering. Here is the code for them (note that they are not documented):

def summarize_corr_series(feature_corr_series):
    max_feature_name_size = 0
    for key in feature_corr_series.to_dict().keys():
        if len(key) > max_feature_name_size:
            max_feature_name_size = len(key)

    max_new_feature_corr = feature_corr_series.max()

    for key in feature_corr_series.to_dict().keys():
        whitespace = []
        for i in range(max_feature_name_size-len(key)):
            whitespace.append(" ")
        whitespace = "".join(whitespace)
        print(key+": "+whitespace+str(abs(feature_corr_series[key])))

def find_top_percent(series, percent):
    return series[series>series.quantile(1-percent)]

def find_correlations(X, y):
    return abs(pd.concat([X.reset_index(drop=True), y.reset_index(drop=True)], axis=1).corr())[y.columns[0]].drop(y.columns[0]).sort_values(ascending=False)
Votes: 3

Stack Overflow user

Posted on 2021-03-09 16:38:54

Unfortunately, featuretools does not easily support this use case, as common as it seems. The best way I have found is to create the first-order features you want with the dfs function, and then manually add the second-order features you want.

For example, the MWE below (using the iris dataset) runs dfs with the AddNumeric primitive and then applies the DivideNumeric primitive to the newly created features using only the original features (while avoiding the same base feature appearing more than once in a transformed feature).

import numpy as np
import pandas as pd
import sklearn.datasets
import featuretools as ft

iris = sklearn.datasets.load_iris()

data = pd.DataFrame(
    data= np.c_[iris['data'], 
    iris['target']],
    columns= iris['feature_names'] + ['target']
)

ignore_cols = ['target']

entity_set = ft.EntitySet(id="iris")
entity_set.entity_from_dataframe(
    entity_id="iris_main",
    dataframe=data,
    index="index",
    make_index=True,  # the iris dataframe has no "index" column, so create one
)

new_features = ft.dfs(
    entityset=entity_set,
    target_entity="iris_main",
    trans_primitives=["add_numeric"],
    features_only=True,
    primitive_options={
        "add_numeric": {
            "ignore_variables": {"iris_main": ignore_cols},
        },
    },
)

transformed_features = [i for i in new_features if isinstance(i, ft.feature_base.feature_base.TransformFeature)]
original_features = [i for i in new_features if isinstance(i, ft.feature_base.feature_base.IdentityFeature) and i.get_name() not in ignore_cols]

depth_two_features = []
for trans_feat in transformed_features:
    for orig_feat in original_features:
        if orig_feat.get_name() not in [i.get_name() for i in trans_feat.base_features]:
            feat = ft.Feature([trans_feat, orig_feat], primitive=ft.primitives.DivideNumeric)
            depth_two_features.append(feat)
            
data = ft.calculate_feature_matrix(
    features= original_features + transformed_features + depth_two_features,
    entityset=entity_set,
    verbose=True,
)

The benefit of this approach is that it gives you more fine-grained control to customize things the way you want, and it avoids the computational cost of creating unnecessary features you do not need.
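Because the features above are built first as feature definitions (features_only=True and ft.Feature objects), one optional, illustrative step is to inspect what has been planned before paying the cost of calculate_feature_matrix. A small sketch reusing the variables defined above:

# Illustrative: peek at a few of the planned depth-two feature names before computing them.
for feat in depth_two_features[:5]:
    print(feat.get_name())
print(len(original_features + transformed_features + depth_two_features), "features will be calculated")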

Votes: 1
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/65448806