gfdl.model.EnsembleGFDLClassifier#
- class EnsembleGFDLClassifier(hidden_layer_sizes: ArrayLike = (100,), activation: str = 'identity', weight_scheme: str = 'uniform', seed: int = None, reg_alpha: float = None, rtol: float = None, voting: str = 'soft')[source]#
Bases: ClassifierMixin, EnsembleGFDL
Ensemble random vector functional link network classifier.
- Parameters:
- hidden_layer_sizes : array-like of shape (n_layers,)
The ith element represents the number of neurons in the ith hidden layer.
- activation : str, default='identity'
Activation function for the hidden layers.
- 'identity': no-op activation, useful to implement a linear bottleneck, returns f(x) = x.
- 'tanh': tanh.
- 'relu': relu.
- 'sigmoid': sigmoid.
- 'softmax': softmax.
- 'softmin': softmin.
- 'log_sigmoid': log_sigmoid.
- 'log_softmax': log_softmax.
- weight_scheme : str, default='uniform'
Distribution used to initialize the random hidden-layer weights.
The initialization functions generate weight matrices of shape (n_hidden_units, n_features), where values are drawn according to the selected scheme.
- 'zeros': set weights to zeros (zeros).
- 'range': set weights to a normalized np.arange (range).
- 'uniform': uniform distribution (uniform).
- 'he_uniform': He uniform distribution (he_uniform).
- 'lecun_uniform': LeCun uniform distribution (lecun_uniform).
- 'glorot_uniform': Glorot uniform distribution (glorot_uniform).
- 'normal': normal distribution (normal).
- 'he_normal': He normal distribution (he_normal).
- 'lecun_normal': LeCun normal distribution (lecun_normal).
- 'glorot_normal': Glorot normal distribution (glorot_normal).
- seed : int, default=None
Random seed used to initialize the network.
- reg_alpha : float, default=None
When None, use Moore-Penrose inversion to solve for the output weights of the network. Otherwise, it specifies the constant that multiplies the L2 term of sklearn Ridge, controlling the regularization strength. reg_alpha must be a non-negative float.
- rtol : float, default=None
Cutoff for small singular values in the Moore-Penrose pseudo-inverse. Only applies when reg_alpha=None. When rtol=None, the array API standard default for pinv is used.
- voting : str, default='soft'
Whether to use soft or hard voting in the ensemble.
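As an illustration of the weight_scheme options above, here is a minimal sketch of how a few of the listed initializers could be implemented. The function name init_weights and the exact fan-in/fan-out conventions are assumptions for illustration, not the library's internals:

```python
import numpy as np

def init_weights(scheme, n_hidden_units, n_features, rng):
    """Draw a (n_hidden_units, n_features) weight matrix per the scheme."""
    shape = (n_hidden_units, n_features)
    if scheme == "zeros":
        return np.zeros(shape)
    if scheme == "uniform":
        return rng.uniform(-1.0, 1.0, size=shape)
    if scheme == "he_uniform":
        # He uniform: limit = sqrt(6 / fan_in)
        limit = np.sqrt(6.0 / n_features)
        return rng.uniform(-limit, limit, size=shape)
    if scheme == "glorot_uniform":
        # Glorot uniform: limit = sqrt(6 / (fan_in + fan_out))
        limit = np.sqrt(6.0 / (n_features + n_hidden_units))
        return rng.uniform(-limit, limit, size=shape)
    if scheme == "lecun_normal":
        # LeCun normal: std = sqrt(1 / fan_in)
        return rng.normal(0.0, np.sqrt(1.0 / n_features), size=shape)
    raise ValueError(f"unknown scheme: {scheme}")

rng = np.random.default_rng(0)
W = init_weights("he_uniform", 100, 4, rng)
print(W.shape)  # (100, 4)
```

Because the hidden weights in an RVFL network are never trained, the choice of scheme (together with seed) fully determines the random features each member network sees.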
Methods

fit(X, y)  Train the ensemble of connected RVFL networks on the training set (X, y).
get_metadata_routing()  Get metadata routing of this object.
get_params([deep])  Get parameters for this estimator.
predict(X)  Predict class for X.
predict_proba(X)  Predict class probabilities for X.
score(X, y[, sample_weight])  Return accuracy on provided data and labels.
set_params(**params)  Set the parameters of this estimator.
set_score_request(*[, sample_weight])  Configure whether metadata should be requested to be passed to the score method.
get_generator(seed)
Notes
The implementation is based on the one described by Shi et al. in [1].
References
[1] Shi, Katuwal, Suganthan, Tanveer, "Random vector functional link neural network based ensemble deep learning." Pattern Recognition, vol. 117, 107978, 2021, https://doi.org/10.1016/j.patcog.2021.107978.
Examples
>>> from sklearn.datasets import make_classification
>>> from gfdl.model import EnsembleGFDLClassifier
>>> X, y = make_classification(n_samples=1000, n_features=4,
...                            n_informative=2, n_redundant=0,
...                            random_state=0, shuffle=False)
>>> clf = EnsembleGFDLClassifier(seed=0)
>>> clf.fit(X, y)
>>> print(clf.predict([[0, 0, 0, 0]]))
[1]
- __init__(hidden_layer_sizes: ArrayLike = (100,), activation: str = 'identity', weight_scheme: str = 'uniform', seed: int = None, reg_alpha: float = None, rtol: float = None, voting: str = 'soft')[source]#
- fit(X, y)[source]#
Train the ensemble of connected RVFL networks on the training set (X, y).
- Parameters:
- X : array-like of shape (n_samples, n_features)
The training input samples.
- y : array-like of shape (n_samples,) or (n_samples, n_outputs)
The target values.
- Returns:
- self : object
The fitted estimator.
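To make the fitting procedure concrete, here is a hypothetical sketch of training a single RVFL member network under the parameters documented above. The function fit_rvfl and its internals are assumptions for illustration (using 'tanh' activation and a direct input-to-output link, as is typical for RVFL), not the library's actual code:

```python
import numpy as np

def fit_rvfl(X, Y, n_hidden_units=100, reg_alpha=None, seed=0):
    """Fit one RVFL network: random fixed hidden weights, solved output weights."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Hidden weights are drawn once and never trained.
    W = rng.uniform(-1.0, 1.0, size=(n_hidden_units, n_features))
    H = np.tanh(X @ W.T)            # hidden features ('tanh' activation)
    D = np.hstack([X, H])           # direct link: concatenate raw inputs
    if reg_alpha is None:
        # Moore-Penrose pseudo-inverse (least-squares solution).
        beta = np.linalg.pinv(D) @ Y
    else:
        # L2-regularized closed form: (D^T D + alpha I)^-1 D^T Y
        k = D.shape[1]
        beta = np.linalg.solve(D.T @ D + reg_alpha * np.eye(k), D.T @ Y)
    return W, beta

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
Y = np.eye(3)[rng.integers(0, 3, size=20)]  # one-hot targets, 3 classes
W, beta = fit_rvfl(X, Y, reg_alpha=0.1)
print(beta.shape)  # (104, 3): 4 input + 100 hidden columns, 3 classes
```

This mirrors the reg_alpha parameter above: None selects the pseudo-inverse branch, a non-negative float selects the ridge-regularized solve.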
- get_metadata_routing()#
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routing : MetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)#
Get parameters for this estimator.
- Parameters:
- deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params : dict
Parameter names mapped to their values.
- predict(X)[source]#
Predict class for X.
- Parameters:
- X : array-like of shape (n_samples, n_features)
The input samples.
- Returns:
- ndarray
The predicted classes, with shape (n_samples,) or (n_samples, n_outputs).
- predict_proba(X)[source]#
Predict class probabilities for X.
- Parameters:
- X : array-like of shape (n_samples, n_features)
The input samples.
- Returns:
- ndarray
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. The ndarray has shape (n_samples, n_classes).
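The voting parameter controls how per-network outputs are combined into the ensemble prediction. The following sketch shows one plausible way the two modes could work (the ensemble internals are assumptions for illustration):

```python
import numpy as np

# Stacked per-network class probabilities: (n_networks, n_samples, n_classes)
proba = np.array([
    [[0.9, 0.1], [0.4, 0.6]],
    [[0.6, 0.4], [0.2, 0.8]],
    [[0.3, 0.7], [0.1, 0.9]],
])

# 'soft' voting: average the probabilities, then take the argmax.
soft = proba.mean(axis=0).argmax(axis=1)

# 'hard' voting: each network votes with its own argmax; majority wins.
votes = proba.argmax(axis=2)  # (n_networks, n_samples)
hard = np.array([np.bincount(votes[:, i], minlength=2).argmax()
                 for i in range(votes.shape[1])])

print(soft)  # [0 1]
print(hard)  # [0 1]
```

Soft voting retains confidence information from each member, which is why predict_proba pairs naturally with voting='soft'.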
- score(X, y, sample_weight=None)#
Return accuracy on provided data and labels.
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that the entire label set be correctly predicted for each sample.
- Parameters:
- X : array-like of shape (n_samples, n_features)
Test samples.
- y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.
- sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
- Returns:
- score : float
Mean accuracy of self.predict(X) w.r.t. y.
- set_params(**params)#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters:
- **params : dict
Estimator parameters.
- Returns:
- self : estimator instance
Estimator instance.
- set_score_request(*, sample_weight: bool | None | str = '$UNCHANGED$') → EnsembleGFDLClassifier#
Configure whether metadata should be requested to be passed to the score method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with enable_metadata_routing=True (see sklearn.set_config()). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
- Parameters:
- sample_weight : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for sample_weight parameter in score.
- Returns:
- self : object
The updated object.