"A craftsman who wishes to do his work well must first sharpen his tools." — Confucius, The Analects (Wei Ling Gong)

Published on 2024-11-03

ML Model Selection.

1. Introduction

In this article we will learn how to choose the best model from multiple models with varying hyperparameters. In some cases we can have more than 50 different models, so knowing how to choose among them is important for getting the best-performing one for your dataset.

We will perform model selection both by selecting the best learning algorithm and by selecting its best hyperparameters.

But first, what are hyperparameters? These are additional settings, set by the user, that affect how the model learns its parameters. Parameters, on the other hand, are what models learn during the training process.
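
As a quick illustration (a minimal sketch; the value of C here is arbitrary): in scikit-learn's LogisticRegression, C is a hyperparameter we choose before training, while the coefficients are parameters the model learns during fit.

# A minimal sketch: hyperparameters are set by us, parameters are learned
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

features, target = load_iris(return_X_y=True)

# C is a hyperparameter -- we choose it before training
model = LogisticRegression(C=1.0, max_iter=500)

# coef_ and intercept_ are parameters -- learned from the data during fit
model.fit(features, target)
print(model.coef_.shape)   # (3, 4): one coefficient per class and feature
print(model.intercept_)    # one intercept per class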

2. Using Exhaustive Search.

Exhaustive Search involves selecting the best model by searching over a range of hyperparameters. To do this we make use of scikit-learn's GridSearchCV.

How GridSearchCV works:

  1. User defines sets of possible values for one or multiple hyperparameters.
  2. GridSearchCV trains a model using every value and/or combination of values.
  3. The model with the best performance is selected as the best model.

Example
We can set up logistic regression as our learning algorithm and tune two hyperparameters (C and the regularization penalty). We also specify two settings that we will not tune: the solver and the maximum number of iterations.

Now, for each combination of C and regularization penalty values, we train the model and evaluate it using k-fold cross-validation.
Since we have 10 possible values of C and 2 possible values of the regularization penalty, there are 10 x 2 = 20 candidate models; with 5 folds each, a total of (10 x 2 x 5 = 100) model fits are performed, from which the best candidate is selected.

# Load libraries
import numpy as np
from sklearn import linear_model, datasets
from sklearn.model_selection import GridSearchCV

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create logistic regression
logistic = linear_model.LogisticRegression(max_iter=500, solver='liblinear')

# Create range of candidate penalty hyperparameter values
penalty = ['l1','l2']

# Create range of candidate regularization hyperparameter values
C = np.logspace(0, 4, 10)

# Create dictionary of hyperparameter candidates
hyperparameters = dict(C=C, penalty=penalty)

# Create grid search
gridsearch = GridSearchCV(logistic, hyperparameters, cv=5, verbose=0)

# Fit grid search
best_model = gridsearch.fit(features, target)

# Show the best model
print(best_model.best_estimator_)

# LogisticRegression(C=7.742636826811269, max_iter=500, penalty='l1',
#                    solver='liblinear') # Result

Getting the best model:

# View best hyperparameters
print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
print('Best C:', best_model.best_estimator_.get_params()['C'])

# Best Penalty: l1 # Result
# Best C: 7.742636826811269 # Result
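
A convenient side effect (a short sketch, continuing from the fitted grid search above): the fitted GridSearchCV object delegates to its best estimator, so we can call predict on it directly.

# The fitted grid search behaves like its best model
predictions = best_model.predict(features)
print(predictions[0:5])

# [0 0 0 0 0] # Result (predictions for the first five iris samples)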

3. Using Randomized Search.

This is commonly used when you want a computationally cheaper method than exhaustive search to select the best model.

It's worth noting that RandomizedSearchCV isn't inherently faster than GridSearchCV; it often achieves performance comparable to GridSearchCV in less time simply by testing fewer combinations.

How RandomizedSearchCV works:

  1. The user supplies sets of hyperparameter values or distributions to sample from (e.g. normal, uniform).
  2. The algorithm searches over a specific number of random combinations of the given hyperparameter values, sampled without replacement (see the sketch below).
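
To make step 1 concrete, here is a small sketch of sampling candidate values from a distribution with scipy.stats (the same uniform distribution used in the example below):

# Draw a few random candidate values for C from a uniform(0, 4) distribution
from scipy.stats import uniform

print(uniform(loc=0, scale=4).rvs(3))  # three random draws in [0, 4)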

Example

# Load libraries
from scipy.stats import uniform
from sklearn import linear_model, datasets
from sklearn.model_selection import RandomizedSearchCV

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create logistic regression
logistic = linear_model.LogisticRegression(max_iter=500, solver='liblinear')

# Create range of candidate regularization penalty hyperparameter values
penalty = ['l1', 'l2']

# Create distribution of candidate regularization hyperparameter values
C = uniform(loc=0, scale=4)

# Create hyperparameter options
hyperparameters = dict(C=C, penalty=penalty)

# Create randomized search
randomizedsearch = RandomizedSearchCV(
    logistic, hyperparameters, random_state=1, n_iter=100, cv=5, verbose=0,
    n_jobs=-1)

# Fit randomized search
best_model = randomizedsearch.fit(features, target)

# Print best model
print(best_model.best_estimator_)

# LogisticRegression(C=1.668088018810296, max_iter=500, penalty='l1',
#                    solver='liblinear') # Result

Getting the best model:

# View best hyperparameters
print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
print('Best C:', best_model.best_estimator_.get_params()['C'])

# Best Penalty: l1 # Result
# Best C: 1.668088018810296 # Result

Note: The number of candidate models trained is specified by the n_iter (number of iterations) parameter.

4. Selecting the Best Model from Multiple Learning Algorithms.

In this part we will look at how to select the best model by searching over a range of learning algorithms and their respective hyperparameters.

We can do this by simply creating a dictionary of candidate learning algorithms and their hyperparameters to use as the search space for GridSearchCV.

Steps:

  1. We can define a search space that includes two learning algorithms.
  2. We specify the hyperparameters and define their candidate values using the format classifier__[hyperparameter name] (note the two underscores).

# Load libraries
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Set random seed
np.random.seed(0)

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create a pipeline
pipe = Pipeline([("classifier", RandomForestClassifier())])

# Create dictionary with candidate learning algorithms and their hyperparameters
search_space = [{"classifier": [LogisticRegression(max_iter=500,
                                                   solver='liblinear')],
                 "classifier__penalty": ['l1', 'l2'],
                 "classifier__C": np.logspace(0, 4, 10)},
                {"classifier": [RandomForestClassifier()],
                 "classifier__n_estimators": [10, 100, 1000],
                 "classifier__max_features": [1, 2, 3]}]

# Create grid search
gridsearch = GridSearchCV(pipe, search_space, cv=5, verbose=0)

# Fit grid search
best_model = gridsearch.fit(features, target)

# Print best model
print(best_model.best_estimator_)

# Pipeline(steps=[('classifier',
#                  LogisticRegression(C=7.742636826811269, max_iter=500,
#                                     penalty='l1', solver='liblinear'))]) # Result

The best model:
After the search is complete, we can use best_estimator_ to view the best model's learning algorithm and hyperparameters.
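
For instance (a short sketch continuing from the fitted search above), we can pull the winning classifier out of the pipeline and use the search object itself for prediction:

# View the learning algorithm selected by the search
print(best_model.best_estimator_.get_params()["classifier"])

# The fitted grid search can be used for prediction directly
print(best_model.predict(features)[0:5])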

5. Selecting the Best Model When Preprocessing.

Sometimes we might want to include a preprocessing step during model selection.
The best solution is to create a pipeline that includes the preprocessing step and any of its parameters:

The First Challenge:
GridSearchCV uses cross-validation to determine the model with the highest performance.

However, in cross-validation we are pretending that the fold held out as the test set has not been seen, and thus is not part of fitting any preprocessing steps (e.g. scaling or standardization).

For this reason the preprocessing steps must be a part of the set of actions taken by GridSearchCV.
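
To see why this matters, here is a hedged sketch of the two approaches (the variable names are illustrative): scaling the full dataset before cross-validation lets information from the held-out fold leak into the scaler, while placing the scaler inside a pipeline ensures it is re-fit on only the training folds of each split.

# Leaky: the scaler is fit on ALL rows, including future held-out folds
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

features, target = datasets.load_iris(return_X_y=True)

scaled = StandardScaler().fit_transform(features)
leaky_scores = cross_val_score(LogisticRegression(max_iter=500),
                               scaled, target, cv=5)

# Correct: the scaler is re-fit inside each training fold only
pipe = Pipeline([("scaler", StandardScaler()),
                 ("classifier", LogisticRegression(max_iter=500))])
proper_scores = cross_val_score(pipe, features, target, cv=5)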

The Solution
Scikit-learn provides FeatureUnion, which allows us to combine multiple preprocessing actions properly.
Steps:

  1. We use FeatureUnion to combine two preprocessing steps: standardizing the feature values (StandardScaler) and principal component analysis (PCA). The resulting object, called preprocess, contains both of our preprocessing steps.
  2. Next we include preprocess in our pipeline with our learning algorithm.

This allows us to outsource the proper handling of fitting, transforming, and training the models with combinations of hyperparameters to scikit-learn.

Second Challenge:
Some preprocessing methods, such as PCA, have their own parameters. Dimensionality reduction with PCA requires the user to define the number of principal components used to produce the transformed feature set. Ideally, we would choose the number of components that produces a model with the greatest performance for some evaluation metric.

The Solution
In scikit-learn, when we include candidate component values in the search space, they are treated like any other hyperparameter to be searched over.

# Load libraries
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Set random seed
np.random.seed(0)

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create a preprocessing object that includes StandardScaler features and PCA
preprocess = FeatureUnion([("std", StandardScaler()), ("pca", PCA())])

# Create a pipeline
pipe = Pipeline([("preprocess", preprocess),
                 ("classifier", LogisticRegression(max_iter=1000,
                                                   solver='liblinear'))])

# Create space of candidate values
search_space = [{"preprocess__pca__n_components": [1, 2, 3],
                 "classifier__penalty": ["l1", "l2"],
                 "classifier__C": np.logspace(0, 4, 10)}]

# Create grid search
clf = GridSearchCV(pipe, search_space, cv=5, verbose=0, n_jobs=-1)

# Fit grid search
best_model = clf.fit(features, target)

# Print best model
print(best_model.best_estimator_)

# Pipeline(steps=[('preprocess',
#                  FeatureUnion(transformer_list=[('std', StandardScaler()),
#                                                 ('pca', PCA(n_components=1))])),
#                 ('classifier',
#                  LogisticRegression(C=7.742636826811269, max_iter=1000,
#                                     penalty='l1', solver='liblinear'))]) # Result


After the model selection is complete we can view the preprocessing values that produced the best model.

Preprocessing steps that produced the best model:

# View best n_components
print(best_model.best_estimator_.get_params()['preprocess__pca__n_components'])

# 1 # Result

6. Speeding Up Model Selection with Parallelization.

Sometimes you need to reduce the time it takes to select a model.
We can do this by training multiple models simultaneously, using all of the cores on our machine by setting n_jobs=-1.

# Load libraries
import numpy as np
from sklearn import linear_model, datasets
from sklearn.model_selection import GridSearchCV

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create logistic regression
logistic = linear_model.LogisticRegression(max_iter=500, 
                                           solver='liblinear')

# Create range of candidate regularization penalty hyperparameter values
penalty = ["l1", "l2"]

# Create range of candidate values for C
C = np.logspace(0, 4, 1000)

# Create hyperparameter options
hyperparameters = dict(C=C, penalty=penalty)

# Create grid search
gridsearch = GridSearchCV(logistic, hyperparameters, cv=5, n_jobs=-1, 
                             verbose=1)

# Fit grid search
best_model = gridsearch.fit(features, target)

# Print best model
print(best_model.best_estimator_)

# Fitting 5 folds for each of 2000 candidates, totalling 10000 fits
# LogisticRegression(C=5.926151812475554, max_iter=500, penalty='l1',
#                    solver='liblinear') # Result
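
For a rough comparison (a sketch; actual timings depend on your machine), the identical search on a single core just sets n_jobs=1, and with verbose=1 you will see the same 10,000 fits take noticeably longer:

# Same search restricted to a single core, for timing comparison
clf = GridSearchCV(logistic, hyperparameters, cv=5, n_jobs=1, verbose=1)
best_model = clf.fit(features, target)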

7. Speeding Up Model Selection (Algorithm-Specific Methods).

This is a way to speed up model selection without using additional compute power.

This is possible because scikit-learn has model-specific cross-validation hyperparameter tuning.

Sometimes the characteristics of a learning algorithm allow us to search for the best hyperparameters significantly faster.

Example:
LogisticRegression trains a standard logistic regression classifier.
LogisticRegressionCV implements an efficient cross-validated logistic regression classifier that can identify the optimal value of the hyperparameter C.

# Load libraries
from sklearn import linear_model, datasets

# Load data
iris = datasets.load_iris()
features = iris.data
target = iris.target

# Create cross-validated logistic regression
logit = linear_model.LogisticRegressionCV(Cs=100, max_iter=500,
                                            solver='liblinear')

# Train model
logit.fit(features, target)

# Print model
print(logit)

# LogisticRegressionCV(Cs=100, max_iter=500, solver='liblinear')
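
After fitting, the C value selected by the internal cross-validation is exposed on the estimator itself; as a quick sketch (in scikit-learn, C_ holds one selected value per class):

# View the regularization strength(s) chosen by cross-validation
print(logit.C_)  # array of the selected C for each class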

Note: A major downside to LogisticRegressionCV is that it can only search a range of values for C. This limitation is common to many of scikit-learn's model-specific cross-validated approaches.

I hope this article was helpful as a quick overview of how to select a machine learning model.

Reprinted from: https://dev.to/oduor_arnold/ml-model-selection-1437