[Python] LightGBM Feature Engineering and Selection Strategies

LightGBM is a fast gradient boosting framework that is widely used for a variety of machine learning tasks. In this blog post, we will discuss feature engineering and selection strategies using LightGBM in Python.

Feature Engineering

Feature engineering is the process of transforming raw data into features that can be used by machine learning algorithms. It often involves creating new features, transforming existing features, or selecting relevant features for the model. Here are some common feature engineering techniques with LightGBM:

  1. One-Hot Encoding: Encode categorical variables as binary indicator columns of 0s and 1s. This can be done using the pd.get_dummies() function in pandas or the OneHotEncoder class in scikit-learn.
import pandas as pd

# Load data
data = pd.read_csv('data.csv')

# One-hot encode every object/category column in the frame
encoded_data = pd.get_dummies(data)
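When the encoding has to be fitted on training data and reused on new data, the scikit-learn encoder mentioned above is the better fit. Here is a minimal sketch, assuming a categorical column named 'city' (a hypothetical name) and scikit-learn >= 1.2 (older versions spell the argument sparse= instead of sparse_output=):

from sklearn.preprocessing import OneHotEncoder

# Fit on the training frame, reuse on validation/test frames
encoder = OneHotEncoder(sparse_output=False, handle_unknown='ignore')
encoded = encoder.fit_transform(data[['city']])  # 'city' is a placeholder column
encoded_df = pd.DataFrame(encoded,
                          columns=encoder.get_feature_names_out(),
                          index=data.index)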
  2. Binning: Convert continuous variables into categorical variables by grouping them into bins. This can be done using the pd.cut() function in pandas.
import pandas as pd

# Load data
data = pd.read_csv('data.csv')

# Binning
data['age_bin'] = pd.cut(data['age'], bins=[0, 18, 35, 60, 100], labels=['young', 'adult', 'middle-aged', 'elderly'])
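The fixed edges above require domain knowledge to choose. As an alternative, quantile binning with pd.qcut produces bins that each hold roughly the same number of rows:

# Quantile-based binning: four bins with roughly equal row counts
data['age_qbin'] = pd.qcut(data['age'], q=4,
                           labels=['q1', 'q2', 'q3', 'q4'])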
  3. Feature Scaling: Scale numerical features so they share a comparable range. LightGBM usually doesn’t require feature scaling, since its trees split on value thresholds that are unaffected by monotonic transformations, but scaling can still help when the same features also feed scale-sensitive models.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Load data
data = pd.read_csv('data.csv')

# Scale only the numeric columns; MinMaxScaler raises on string columns
numeric_cols = data.select_dtypes(include='number').columns
scaler = MinMaxScaler()
data[numeric_cols] = scaler.fit_transform(data[numeric_cols])

Feature Selection

Feature selection is the process of selecting a subset of relevant features for the model. It helps to improve model performance, reduce overfitting, and speed up training. Here are some feature selection strategies with LightGBM:

  1. Feature Importance: Use LightGBM’s built-in feature importance scores to rank features. You can access them through the feature_importances_ attribute of the trained model; by default these are split counts, and passing importance_type='gain' to the model yields gain-based scores instead.
import lightgbm as lgbm

# Train model (X_train and y_train are assumed to be defined)
model = lgbm.LGBMClassifier()
model.fit(X_train, y_train)

# Per-feature importance scores (split counts by default)
feature_importances = model.feature_importances_
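The raw array is hard to read on its own. Assuming X_train is a pandas DataFrame, pairing the scores with column names makes the ranking explicit:

import pandas as pd

# Rank features from most to least important
importance_df = (
    pd.DataFrame({'feature': X_train.columns,
                  'importance': model.feature_importances_})
    .sort_values('importance', ascending=False)
)
print(importance_df.head(10))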
  2. SelectFromModel: Use the SelectFromModel class in scikit-learn to select features whose importance exceeds a threshold. Pass prefit=True to reuse an already-fitted model, and set the threshold parameter to control how many features survive.
import lightgbm as lgbm
from sklearn.feature_selection import SelectFromModel

# Train model (X_train and y_train are assumed to be defined)
model = lgbm.LGBMRegressor()
model.fit(X_train, y_train)

# Select features from the already-fitted model. LightGBM importances
# are split counts by default, so a relative threshold like 'median'
# is more meaningful than a small fixed value such as 0.1.
selector = SelectFromModel(model, threshold='median', prefit=True)
selected_data = selector.transform(X_train)
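To see which columns survived, the selector’s get_support() mask can be mapped back to the column names (again assuming X_train is a pandas DataFrame):

# Boolean mask over the original columns
mask = selector.get_support()
selected_features = X_train.columns[mask]
print(selected_features.tolist())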
  3. Correlation Analysis: Analyze the correlation between each feature and the target variable. Remove features that barely correlate with the target, or that are highly correlated with other features (see the sketch after the snippet below).
import pandas as pd

# Load data
data = pd.read_csv('data.csv')

# Absolute correlation of each feature with the target column
# ('target' is a placeholder for your label column)
correlations = data.corr()['target'].abs().drop('target')

# Drop features that barely correlate with the target
low_corr_features = correlations[correlations < 0.05].index
selected_data = data.drop(columns=low_corr_features)
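For the second case, dropping one feature from each highly correlated pair, a minimal sketch (the 0.95 cutoff is an arbitrary choice) scans the upper triangle of the correlation matrix:

import numpy as np

# Upper triangle of the absolute correlation matrix
corr = data.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))

# Drop one feature from every pair correlated above 0.95
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
reduced_data = data.drop(columns=to_drop)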

In conclusion, feature engineering and selection play crucial roles in building effective machine learning models. LightGBM, with its speed and accuracy, is a great tool for these tasks. Apply the techniques and strategies outlined in this blog post to improve your models’ performance with LightGBM in Python.

Happy coding!